[liberationtech] was: Forbes recommends tools for journalist; is now: depressing realities
Danny O'Brien
DObrien at cpj.org
Wed Dec 19 10:02:47 PST 2012
On Wed, Dec 19, 2012 at 05:26:05AM +0000, Jacob Appelbaum wrote:
> Hi,
>
> frank at journalistsecurity.net:
> > But if
> >> you're getting information security advice from a Forbes blog, that
> >> will be the least of your worries.
> >
> > Where would you suggest we get information security advice from?
>
> This is an interesting question and I admit, I feel like it leaves a bad
> ring in my ears...
>
> What kind of security advice? Who is following the advice? Does their
> context change while they follow this advice? Do they have resources of
> a user without more than a casual interest or are they well funded and
> dedicated? What are their requirements? What are their temporal
> tolerances? Do they understand safety plan or threat model without
> further explanation? What are the stakes for failure?
>
> The answer to each of those questions would shift my answers to
> subsequent questions around, I guess.
>
Just to add some notes to Jake's excellent points and broaden the
discussion. I hope I'm not thread-jacking, but Jake's comments unlocked
a lot of points that I've been thinking about recently.
Protecting Sources -- changing the relationship between reporter and source
One social act that journalists can adopt which has nothing to do with
technology, but everything to do with how technology has changed both
the threats and the opportunities of journalism, is to consider what
*has* to be known about a source. Traditionally, the relationship
between a source and a journalist has rested on an inner sanctum of
shared information, surrounded by a set of carefully managed, publicly
released data.
For certain beats, there are all kinds of problems with this model at
this point. One is that, technically and politically, it's getting
harder to protect the data in the inner sanctum, even within supposedly
stable open societies. Without exaggeration, we've accidentally built a
data-collection system that the Stasi would have marvelled at, and then
put all the pressure against its misuse on statutory protections that
have little oversight, poor incentives and almost no track record of
punitive action.
Second, the management of the released information in order to protect
an identity is now practically a full-time security job in itself.
Forget protecting data that source and journalist agree is confidential;
even the information that has been agreed to be made public can be
compromising in ways that neither party could anticipate. This isn't a
question of ignorance; it's a question of how skillfully we can now
collectively pool open source[1] info to deduce hidden data. It's a
precept of the security professionals I know that you simply can't
reliably anonymise mass databases of information; what's unknown is how
little you need to add to the wealth of already public information
before a single identity is uncovered. In that sense, I'd welcome this
Forbes piece, because it's the first time that I've seen wide public
discussion of this problem -- that this journalist revealed information
about their source through what both agreed should be made public. I'm
pretty sure McAfee[2] didn't even realise that this was a threat, let
alone the editors and writers at Vice.
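To make that concrete, here's a toy sketch -- entirely hypothetical
records and field names -- of the kind of linkage attack I mean: a few
individually innocuous public facts, joined against an "anonymised"
dataset, narrow it down to a single row. This is the mechanism behind
the classic finding that ZIP code, birth date and sex alone uniquely
identify most of the US population.

    # Toy linkage attack: join public facts against an "anonymised"
    # dataset (names stripped) on a few quasi-identifiers.
    anonymised_rows = [
        {"zip": "94110", "birthdate": "1971-03-02", "sex": "M", "note": "record A"},
        {"zip": "94110", "birthdate": "1985-11-30", "sex": "F", "note": "record B"},
        {"zip": "10013", "birthdate": "1971-03-02", "sex": "M", "note": "record C"},
    ]

    # Facts the subject (or their friends) have already made public.
    public_facts = {"zip": "94110", "birthdate": "1971-03-02", "sex": "M"}

    matches = [row for row in anonymised_rows
               if all(row[k] == v for k, v in public_facts.items())]

    if len(matches) == 1:
        print("unique re-identification:", matches[0])
    else:
        print(len(matches), "candidates left; one more public fact may do it")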
My point here is that among all of these threats, there's also
opportunity. Some of the Net-savvier journalists I know now take a
minimal-knowledge approach to sources; you don't need to know who the
source is in order to verify the information you've been provided. This
is a situation that is, I think, historically unusual, but it is
increasingly common. You work with the data itself to confirm its
veracity. You don't need to know whether a quarter of a million
diplomatic cables were leaked by a particular security analyst, because
you can externally verify the accuracy of the data.
There are a lot of challenges to this approach, but there are
advantages too. It apparently increases the risk of being fed
false-flag info, but it also guards against accepting false information
simply because it comes from a trusted authority. It decreases the
value of personal contacts in journalism, but it increases the value of
data analysis. Most importantly, it helps with both of the major
problems in journalist-source protection: it eliminates the requirement
to preserve the inner sanctum, and it aligns the incentives of the
journalist with those of the source in testing and validating the
safety of revealing data to the public.
[1] in the old-fashioned sense of open source intelligence
[2] speaking as someone who was asked by McAfee how cellphones
triangulate location (I didn't answer) -- having your name on a
security product doesn't mean you're an expert in all security.
Revealing our methods
I'm really, really happy that Jake has talked a little about his own
procedures, because as a community we're really bad at this. There are
a couple of reasons for that, I think. The first is that, despite all
of our talk about the dangers of security through obscurity, we're all
scared that publicly revealing information about our setup exposes us
to increased risk. Second, we're scared of looking stupid, or being
exposed to condemnation.
I think both of these concerns are valid. If I told you that I used
FreeBSD 7.4 on my server, say, and that I'm a big fan of
libpurple-driven OTR clients, I've possibly made it somewhat more
convenient to find a way of attacking me, even though there are at
least a couple of ways that I'm emitting those facts almost constantly.
Second, if I *did* tell everyone I use libpurple, or that I have Skype
on my machine, I'd be extremely vulnerable to people pointing out that
libpurple is not exploit-free and that Skype is used as a vulnerability
distribution vector.
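To illustrate "emitting those facts almost constantly": many services
announce their exact software version to anyone who connects. Here's a
minimal sketch, with a placeholder hostname -- an SSH server sends its
version banner before any authentication happens.

    import socket

    # Grab the SSH version banner (RFC 4253): the server volunteers it
    # as the first line of any connection, before authentication.
    def grab_ssh_banner(host, port=22, timeout=5.0):
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(256).decode("ascii", errors="replace").strip()

    # Output looks something like "SSH-2.0-OpenSSH_5.8p1 FreeBSD" --
    # software and often OS version, broadcast to anyone who asks.
    print(grab_ssh_banner("example.org"))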
I don't know what to do about this. As a community, we jump on people
who publicly reveal less-than-perfect security practices. But we've
all -- even if only in retrospect -- realised risky things we've done
in the past. We can't learn from our mistakes, and worse, others can't
learn from our mistakes, unless we admit to them. And we can't berate
coders for not exposing their programs to security audits unless we
have a better way of sharing the practical knowledge we ourselves use
every day; we're not going to do that if we just spend our time
pretending we anticipated the latest zero-day years before it actually
came about.