[liberationtech] New Yorker debuts Aaron Swartz's 'Strongbox'

Eleanor Saitta ella at dymaxion.org
Thu May 16 16:26:28 PDT 2013


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 2013.05.17 00.05, Fabio Pietrosanti (naif) wrote:
> I like DeadDrop's uber-paranoid approach. I'm just convinced that
> it's overkill, designed in a way that excessively sacrifices
> usability and efficiency, and thus not suitable for the many users
> we'd love to see starting up their own anonymous whistleblowing
> initiatives.

This is a system designed in a Western context for the use of
rich-world professional media organizations.  Yes, it's not going to
be achievable for everyone.

> It's very important, in my view, to let an ecosystem of
> initiatives start with little or no effort, because it's better to
> have 10,000 diverse, distributed whistleblowing sites than a few
> big, complicated ones.

What level of risk is it appropriate for organizations to expose their
(indirect) users to?  What level of risk mitigation do you, as a
software developer, owe those individuals?  I don't ask this in a flip
way.

Democratizing access to things like running a whistleblowing system is
great.  On the other hand, making those activities easier to start
when you know people aren't going to be able to properly defend
themselves is a bit problematic.  Obviously, it's their call, but as a
tool-builder, you're not isolated from that decision.

This is a question that runs through a lot of our field right now.  If
you release software that encourages high-risk behavior (like, say,
secure communications for activists) but don't do basic due diligence
(like getting it audited and fixing the identified issues), that's a
problem.  If we teach people how to do some secure communications, and
thus encourage them to talk about risky things online, but we know
they're not actually going to know enough to stay safe, have we raised
awareness, or just put them in danger?

> That kind of "enemy" (corporate or nation-state security) would
> attack the organization and the people, not the server (placed in
> an "unknown location" behind a Tor Hidden Service).

Not necessarily.  It's often very expensive for governments, in PR
terms, to come after media organizations directly.  Using this
example: if the FBI sends a subpoena to the New Yorker for the
contents of this system, a bunch of journalists dutifully troop off to
jail instead of turning the system over, and the case blows up onto
the front page of every newspaper in the country for a week.  A
corporation has even less recourse -- it likely can't even sue until
something has been published, and then often the most it can do is
throw a libel suit around.  This isn't true in every context, but
different avenues of attack always have different kinds of defense.
If you constrain your adversary in terms of what actions they can
take, that's a victory.

Separately, if you're not trying to defend against nation-state or
corporate security forces, exactly who are you receiving leaks on?

> And "if" that "enemy" would attack the servers, it would reasonably
> do it only after many weeks or months that the "incriminated
> submissions" has been done, after the "information has been already
> leaked and published".

This makes no sense.  Why would they wait?  If they don't know about
a leak yet, sure, but that's not always the case, and there are times
when an adversary might want to just keep an eye on what's going
through a server like this.

> Regarding compartmentalization, that should be done through proper
> system/filesystem/network sandboxing for efficiency, using modern
> SELinux/AppArmor/iptables mechanisms. Even the US NSA abandoned
> most "physical compartmentalization" practices in favor of "logical
> compartmentalization" (see the NSA Mobility Package or NSA Trusted
> Systems as examples).

No, they didn't.  They offer non-compartmentalized tools for some
situations.  SIPRNet workstations are airgapped from NIPRNet, etc.  VM
breakout attacks are a very real thing and the notion that virtual
separation is sufficient for compartmentalization when under serious
attack is very, very dangerous.

Obviously, it should be taken as read that there are tradeoffs here,
and I agree that this design is suitable for specific scenarios, not
everywhere.

Again, though, why the emphasis on a single machine?  I can understand
the argument that there should be a lower admin bar -- that seems
entirely reasonable -- but hardware is *cheap*, especially when you're
looking at very low throughput use cases.  Cheap isn't free, even in
the developing world, but a "server" here can be something as light as
a Raspberry Pi.  Humans are the expensive part of any deployment
scenario for a system like this.

> In that scenario, if the "journalist workstation is compromised",
> the "scope of his investigation is compromised" too, regardless of
> whether the "secure viewing workstation is secure". If "national
> security forces" are listening to the "journalist workstation",
> they know what's going on.

They know some things, sure.  Compromise is not an all-or-nothing
issue, and knowing the subject of an investigation or even that there
exists a leaked document isn't the same as having full access to the
document along with metadata and conversation information that could
compromise the whistleblower.

The point of the layers in a system like this is defense in depth.
This is at the core of any security practice.

> Additionally, if the journalist "finds something", sooner or later
> he will need to share it, bringing it to the "journalist
> workstation" and breaking the security model.

Why?  They need to share the story they write, sure, but that doesn't
mean a leaked document ever needs to touch an insecure machine.  Even
if it does, they can bring over only the documents they need, only
when they need to do so.

> When end-users find a security measure unsuitable for them, they
> will just "bypass the security procedure".

Sure.  And this is absolutely fine.  Users get to decide what level of
risk is appropriate in a given situation, and their decision is final.
However, the converse is also true -- users can't take advantage of
protections not offered to them.

The point of a system like this isn't to create ironclad rules that
are never broken, it's to create alternatives.  Sometimes a journalist
is going to have a leak land in their normal inbox, and that's fine;
sometimes they're going to decide that, owned or not, they need to
open that Excel spreadsheet right now on their main workstation, and
that's their call.  However, if we can get them to do something else
95% of the time and give them easy ways to notice and recover when
they screw up, we're all doing a lot better.

This is all about the averages, not the absolutes.

> When you "split" the working environment between "secure" and
> "insecure", the "secure" one will slowly be abandoned. I agree
> that's a bad practice, but it's human nature, and it will happen.

This is a cultural question, not a global truism.  There are plenty of
places where this isn't true, and assuming that segregation of systems
for different purposes always fails puts us at a great disadvantage
when we attempt to figure out how to design usable segregated systems
that interact well with a given organizational culture.

> This is a controversial topic, because the "metadata" may be one of
> the few sources of information that let the journalist make the
> correlations needed to verify that the data are good. From an
> investigative journalism point of view, metadata should not be
> scrubbed by default.

Of course not; no one is suggesting that.

> Metadata may be an extremely valuable piece of information, and it
> should really be up to the journalist to evaluate whether it should
> be scrubbed or not.

Absolutely!  It's as important as it is dangerous, which is why it
should be kept somewhere safe unless it's necessary to reveal it in full.
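
As a concrete illustration of that "keep it somewhere safe" approach,
here's a minimal sketch, assuming the Python Pillow imaging library
and hypothetical file names: re-saving only the pixel data of an image
drops the EXIF metadata from the copy that gets shared, while the
metadata-bearing original stays in secure storage.

    # Minimal sketch: share a scrubbed copy of an image while the
    # metadata-bearing original stays in secure storage.  Assumes the
    # Pillow library; file names are hypothetical.
    from PIL import Image

    original = Image.open("submission.jpg")      # stays offline
    scrubbed = Image.new(original.mode, original.size)
    scrubbed.putdata(list(original.getdata()))   # pixels only, no EXIF
    scrubbed.save("submission-scrubbed.jpg")     # safe to pass around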

>>> So I really think it's unrealistic to handle dozens or hundreds
>>> of submissions per month by copying received data offline,
>>> decrypting it, and analyzing it offline through a different
>>> workstation.

Really?  It takes maybe ten minutes for each one, being generous.  Is
it unreasonable for an organization running a leak dropbox to spend a
few hours a month looking at what lands in it?  The time it takes to
figure out whether a given submission matters is going to massively
outweigh the time it takes to handle it securely.

>> What do you base your assumptions of submission rate and workload
>> on?
> An anti-corruption initiative I spoke with got more than 2,000
> submissions in one year.

Great!  That's a big submission feed, and the overhead, at a guess,
would be about 300 hours a year to handle it securely.  Let's assume
that much time again in sysadmin time.  That means they need about a
third of a full-time staffer to give their sources a much higher level
of assurance that their submissions aren't being immediately sent back
to the people they're trying to hide from.  If they can't handle that
and their sources are taking serious risks to come to them (which they
almost certainly are), maybe they should reconsider whether or not
they're actually in a position to do this work?
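
For what it's worth, here's the back-of-the-envelope version of that
estimate in Python; the ten minutes per submission and the ~2,000-hour
work year are the assumptions above, not measured numbers:

    # Back-of-the-envelope workload estimate.  The per-submission time
    # and the length of a work year are assumptions, not measurements.
    submissions_per_year = 2000
    minutes_per_submission = 10

    handling_hours = submissions_per_year * minutes_per_submission / 60
    total_hours = handling_hours * 2      # as much again for sysadmin time
    fte_fraction = total_hours / 2000     # ~2,000-hour full-time year

    print(f"{handling_hours:.0f} h/yr handling, {total_hours:.0f} h/yr total")
    print(f"about {fte_fraction:.2f} of a full-time staffer")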

We live in a world where it's reasonable to say that most systems used
by journalists working at a certain level of influence (cf. the New
York Times, etc.) are probably compromised at least some of the time,
and where professionally written malware is used offensively on a
continual basis.  Given this, we have little hope at present of
ensuring that an adversary can't get at documents stored on a
normal-use system.  There are incremental improvements we can make
(and those are very, very important -- I don't mean this to be a "we
should give up all hope" comment), but we also have to deal with that
world.

> Additionally, we should consider that when a whistleblower makes a
> submission, he expects to get feedback "quickly". He is nervous,
> risking a lot, doesn't know whether what he is doing is right or
> wrong, and he can't just wait for the journalist to get back to the
> office and check his "secure workstation" to download a submission
> three days later.

I don't see why having a separate system affects latency much, unless
you're expecting journalists to be evaluating leaked documents on
their phone.  If you are, you should seriously reconsider; that's a
terrifying prospect.  There's also no reason why a system like this
can't have multiple people monitoring it; I'd assume that as a given.

> A Document Management System is used in the editorial information
> system of any magazine, so it may also be needed for cooperating on
> documents where investigative journalism activities are carried
> out.

That doesn't mean that sensitive source documents need to be entered
into it directly.

> Lowering the entrance barrier while still keeping a very high
> security level is the target I'd like to see reached.

Non-airgapped systems do not represent a very high level of security
in this space.

> "Is more secure a system with grsecurity's custom kernel that
> require high skills and a painful procedure to upgrade or a system
> without grsecurity where you can schedule automatic-upgrade?"

That depends on what kind of sysadmin resource you have available.
Obviously, a system that you can't keep patched is likely less secure,
but there's no single easy answer.  In general, if you don't have the
resources to maintain a system, you probably shouldn't be running it
for things that may be life-safety-critical for some of your (possibly
indirect) users.

> Are you coming to OHM2013 http://ohm2013.org in the Netherlands
> this summer?  It would be a good occasion for that liter of beer
> mentioned previously :-)

Nope.  Cf. politics & police, but that's a different thread.  Some
other time, though, sure.

>>> I just find it overkill for general use.
>> What's "general" use?
> General use is, for example:
> - A citizen media outlet
> - An independent media outlet in a closed society
> - An investigative journalism group
> - A political activism action/campaign
> 
> Those kinds of users have very low resources and represent, imho,
> the foundation of the future of digital whistleblowing that we
> should foster. I hope to see thousands of whistleblowing
> initiatives up and running in a few years, for each context, in
> each country, for each sector.

These organizations all have vastly different needs and capabilities.
A single individual, unless they're technically inclined, is going to
need a really simple system, yes.  For the rest of them, though, it's
going to depend on the group.  If you're talking about an activist
group that's three guys in an office, that's one bar; the ACLU is a
different one.

There are some stories that a small organization won't have the
resources to deal with if they land in its hands.  This can be
anything from a large-scale financial corruption scandal that will
require weeks of forensic accounting time to a serious,
national-security-level whistleblowing event.  This is fine.  Not
every organization can cover every story, and organizations should
(and often do) know when they're out of their depth.  Leaking systems
are no different.

For many kinds of story that might be leaked and many organizations in
this category, this is an entirely reasonable level of preparedness.

E.

- -- 
Ideas are my favorite toys.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.17 (MingW32)

iF4EAREIAAYFAlGVayQACgkQQwkE2RkM0wqfEwD8DqfLUHGvjj5a5UaeWCTwnGg0
YFBToQUHuhFsfVPmpgIBAJ0bsCjf4OmpcQxu2Xla/QugkT27mfAlhOWVkUu5Ir4a
=YuBi
-----END PGP SIGNATURE-----


