[liberationtech] OkayFreedom
Jacob Appelbaum
jacob at appelbaum.net
Mon Oct 29 05:26:54 PDT 2012
Eric S Johnson:
>> misremember the entire discussion; it happens to all of us!
>
> I imagine we each remember what best supports our own point of view.
> I'm sure it happens to all of us!
I went back and read the emails we exchanged; did you?
>
>> open at the moment for those in the US is if we will have some kind
>> of justice for this spying on all of us. It sure seems bleak.
>
> Yes, it does. I hope all the Amcits on this list have voted (or will
> do so)!
>
Sadly, nothing on any ballot I've seen mentions these issues or even has
candidates who will address them. Quite frustrating.
>> to make their own choices, to show data and stories about lessons
>> we've learned the hard way, and when we are able, to offer
>> solidarity where it is possible and welcome. What matters is that
>> users must be protected against serious attackers.
>
> Agreed.
>
I'm glad that we agree here - I am not entirely sure of the best way, or
even a good way, to express some of these stories. At best, I find that
local stories about a specific person seem to do the trick, but it is
quite hard to tell such a story without having an example. The deep irony
is that the risks exist before a single person is jailed - the same story
doesn't need to be relived, but it seems that it must be regularly repeated.
>> I personally feel like it is often suggested that the burden to
>> show something is unsafe is on us.
>
> You assume everything is unsafe.
That is incorrect. I think that everything is *unknown* and that a
vendor's claims are often overly boastful. Thus something *might* be safe
in some *cases*, but a vendor claim is *not sufficient* as evidence of
security, privacy, ease of use or actual running quality.
> Saying "telephones are dangerous.
> VPNs are dangerous. Anything Microsoft is dangerous. Everything's
> dangerous"--well, okay, sure, so is walking across the street (let
> alone just breathing, especially for those of us who live in China).
These things aren't binary. They must be measured on many different
metrics. Many extremely negative metrics for a given technology probably
do indicate that it is dangerous to use in, say, Syria if your location
is confidential and you are being hunted by Assad. That doesn't mean you
shouldn't make the satphone call - it just means that *you should
knowingly* make that choice. Informed consent is important.
In your examples, I think you're thinking in a binary manner that
doesn't really make sense - you oversimplify my concerns to the point of
making them meaningless. So let's get down to brass tacks, shall we?
A telephone is insecure as a matter of fact. It can be wiretapped, it
can be intercepted and tampered with in transit, it may have incorrect
Caller ID/ANI, it may have really bad echo, it may have poor audio
quality and so on. Does that mean that telephones are worthless? No,
absolutely not. Does that mean that it is reasonable for a journalist to
talk to a source over the phone about confidential things that will get
the source arrested and jailed? It depends. Should a journalist just
decide everything is fine? Probably not but hey, there are lots of
journalists who are terrible at *everything* they do!
Is it possible to make a call and have no one intercept it? Perhaps -
where are both parties working? Is it secure? No, probably not.
Is it dangerous? It depends! Is it absolutely safe? Nope!
Some kinds of VPNs are _totally_ broken, others might be alright, still
others are unknown but seem to be OK. I have some doubts about a lot of
the choices people make here. As an example, I went through and looked at
a bunch of pay-for VPN services that use standard VPN protocols, something
I think is a *good* sign - rather than, say, OkayFreedom, which does so
but doesn't tell anyone. The results were hilariously sad - here's a
bunch of shared secrets for many popular providers:
https://twitter.com/obisw4n/status/249646520067031040
https://twitter.com/marshray/status/249642980313554944
https://twitter.com/ioerror/status/249645991999991808
https://twitter.com/ioerror/status/249645223213404160
https://twitter.com/ioerror/status/249644598778007553
https://twitter.com/ioerror/status/249647806166487040
https://twitter.com/ioerror/status/249648046747574272
https://twitter.com/ioerror/status/249648741768912896
https://twitter.com/ioerror/status/249648882588475392
https://twitter.com/ioerror/status/249649014537068544
https://twitter.com/ioerror/status/249649401847500801
https://twitter.com/ioerror/status/249649654189416449
https://twitter.com/ioerror/status/249649822024470529
https://twitter.com/obisw4n/status/249649648552259584
https://twitter.com/obisw4n/status/249651084082180096
https://twitter.com/obisw4n/status/249651595166498817
https://twitter.com/peter_tonoli/status/249668113489096704
https://twitter.com/jedisct1/status/249689070593245185
So are VPNs dangerous? If they're not using OpenVPN with certificates
and *a known and trusted provider*, or IPsec with certificates, I'd say
yeah, it is dangerous. Is it good enough? In some cases, sure! Without
actually understanding the details, though, I'd be extremely worried
about projecting our perceptions of risk, concern and so on. Who knew
they needed to worry about FinFisher's upgrade hijacking technology
before it started being deployed to "law enforcement" agents?
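To make the certificate point concrete, here's a rough sketch of the
kind of sanity check I mean - a few lines of Python (standard library
only) that look at an OpenVPN client config and flag whether it carries
certificate material or leans on a static key or a bare username and
password. The directive names are real OpenVPN options; the pass/fail
heuristic is my own simplification, and a "pass" says nothing about
whether the provider deserves your trust.

    #!/usr/bin/env python
    # Rough heuristic: does this OpenVPN client config authenticate with
    # certificate material, or does it lean on a static key / password?
    # Illustrative only - passing this check is not evidence of security.
    import sys

    CERT_DIRECTIVES = {"ca", "cert", "key", "pkcs12", "tls-auth"}
    WEAK_DIRECTIVES = {"secret", "auth-user-pass"}  # static key / password only

    def classify(path):
        seen = set()
        with open(path) as config:
            for line in config:
                line = line.strip()
                if not line or line.startswith(("#", ";")):
                    continue
                # inline blocks like <ca>...</ca> also count as cert material
                parts = line.lstrip("<").split(">")[0].split()
                if parts:
                    seen.add(parts[0].lower())
        if seen & CERT_DIRECTIVES:
            return "certificate material present (better, but keep asking questions)"
        if seen & WEAK_DIRECTIVES:
            return "static key or password only - be suspicious"
        return "no recognizable authentication directives found"

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "->", classify(path))

Run it over a provider's .ovpn files and treat the output as a
conversation starter, not a verdict.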
Part of having defense in depth isn't just being reactive - it is
thinking about these problems proactively. So sure, you might just need
to use PPTP, but really, why? Why would you do that? What if you are
totally wrong? Everything fails, and it fails spectacularly! Pick a
protocol that isn't garbage and, well, you'll still need to plan for a
time when it breaks or fails to work as expected. At least, however, that
time has yet to come in public for the technology, while we do not know
if that time has come for the *people* running the service.
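To put a rough number on "garbage": public analyses of MS-CHAPv2, the
handshake that PPTP commonly relies on, reduce breaking a captured
handshake to the cost of searching a single 56-bit DES keyspace. The
back-of-the-envelope below is only an illustration - the keys-per-second
figure is an assumed rate for dedicated hardware, not a benchmark.

    # Back-of-the-envelope: how long does a 2**56 DES keyspace survive?
    # The cracking rate is an assumption for illustration, not a benchmark.
    KEYSPACE = 2 ** 56          # MS-CHAPv2 reduces to one DES key search
    KEYS_PER_SECOND = 10 ** 12  # hypothetical dedicated-hardware rate

    seconds = KEYSPACE / KEYS_PER_SECOND
    print("worst case: %.1f hours (%.1f days)" % (seconds / 3600, seconds / 86400))

Whatever rate you plug in, there is no safety margin to plan around -
which is exactly the point about planning for the day a protocol breaks.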
If you're working in China, I do think using Microsoft products is
pretty nuts. I'm sure you've heard of Titan Rain - the code name for a
series of attacks against US government computers. The attackers
regularly wrote exploits for unknown bugs and completely compromised
computers across the board. It is thought, and pretty well established,
that the actors writing those exploits were Chinese - so they've proven
that they can compromise Microsoft products as sport; perhaps it might
make sense to consider that they'll use those exploits against
journalists too?
Failing to diversify, and thereby playing to their strengths, is perhaps
not a good strategy. That doesn't mean that Microsoft is in itself bad -
their code quality is seriously better than most others' in most, if not
all, ways. The problem is that when it isn't, only the attackers get to
see it - and they are not the ones who will fix it. And if they already
have those exploits sitting around in an exploit pack, well, game over.
So is it safe to use Microsoft products when dealing with targeted but
largely opportunistic attacks in China? I doubt it. Is it dangerous? In
that context, yeah, I think it would be a safe *guess* to say yes, it is
dangerous. Now, to be clear, if you're under targeted attack, OpenOffice
is not going to save you against anything except the lamest of attacks.
A talented attacker will find bugs in OO in minutes.
You talk about walking being dangerous and yeah, I've been to China too
- it is epic just to cross the street. Does that mean you'd suggest there
is no way to do it extremely badly?
I ride a motorcycle and it is dangerous. Still, I mitigate the harm with
safety gear that protects against likely injuries, I educate myself on
courses of action in an emergency, and I know that I'll still die if a
large truck hits me at full speed. There isn't anything to do about that
kind of bad luck other than avoid contextually stupid things.
> But if you have only ten minutes to get this journo in Gyanja, Gomel,
> or Gonder to do something different, even you (let alone the rest of
> us relative neophytes) aren't going to be able to get him using
> TAILS.
If the first five minutes are spent listening to the journalist and what
they say is reasonable, I'd take a few minutes to respond. In a
five-minute discussion, I wouldn't get technical at all. I have given
*five*-minute talks where I specifically give *positive* examples that I
think do improve security on an activity-by-activity basis. I gave an
Ignite! talk in Seattle a few years ago about the "Going Dark" issue - I
think it went pretty well; I'd encourage you to watch it.
It was a tools-based discussion for only the second half - the first
half was a discussion of how unsafe most things are by default. This in
itself is totally contrary to expectations - people often expect things
to be secure and are *amazed* when someone shows them how things
actually work.
So I think the trick if you have five minutes is to encourage them to
use their big brains by *inspiring* them to dig deeper and by *showing*
them that these five minutes are just scratching the surface.
> So, we have to prioritise. One way to prioritise is to assign
> various levels of likelihood to the possible threats. And one way to
> do that, in turn, is to assess what we do know about the threats
> which have proven problematic in the past.
You keep saying this, but how is that happening? Shi Tao has been in
jail for how long now? What lessons from his suffering are being brought
forward? That was *merely* an IP address!
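For anyone who treats "merely an IP address" as an abstraction: ordinary
email drags one along by default. Here's a small sketch - Python standard
library only, written as an illustration rather than anything specific to
that case - that pulls IPv4 addresses out of the Received headers of a
raw message piped into it. That is roughly the kind of trail a provider
can be compelled to hand over.

    # Extract IPv4 addresses from the Received headers of a raw message.
    import email
    import re
    import sys

    IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def received_ips(raw_message):
        msg = email.message_from_string(raw_message)
        for header in msg.get_all("Received", []):
            for ip in IP_RE.findall(header):
                yield ip

    if __name__ == "__main__":
        for ip in received_ips(sys.stdin.read()):
            print(ip)

Pipe any saved message through it and look at how much of your network
location travels with every "private" note.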
> Sure, we don't know what
> we don't know: epistemology and all that. But we can tally up what we
> have learned, and use that as a basis, however imperfect, for saying
> to the activist from Gweru: if we only have ten minutes, the goal is
> to move toward mitigating problem X (and we'll only be able to
> provide the simple solution which takes partial care of the
> problem--not a solution which would keep the NSA off Jake's back, but
> a solution which is likely to make this particular person safer). If
> we have an hour, we should be able to help mitigate X, Y, and Z.
> Ideally, we'll have three days, and then we can help mitigate all 15
> top problems.
If I accept your time constraints, I'm still not sure that problem X has
been mitigated - I mean, this is so vague as to be pointless, Eric.
We are not able to mitigate the threat of the phone networks or of
surveillance on the internet. We cannot do this because we are required
to interface with both of these things all the time without options for
protection. They are both insecure by default, even if functional and
awesome at times.
> To "there's no point in anything less than
> perfection"--well, yes, we'll have to agree to disagree on that. I
> think there's huge value in getting someone to use a solution which
> is "more secure" in their particular context (ideally we get that
> knowledge from on-the-ground research in addition to reports in
> Western media), even if it's not a perfect solution.
>
I do not generally advocate for perfection. Perfection is the enemy of
Good Enough. I'm advocating *against* things that are *known* to be not
good enough, *against* things that are *unknown* until there is evidence,
and *for* processes rather than products.
We disagree on the very last part of that, at least. I find it
frustrating that, practically, we also disagree about the things known to
be broken in some contexts - it seems that you still think them fine to
use because you have no evidence of the locals knowing about that
brokenness. That is not a safe strategy. When things have buffer
overflows, lack digital signatures, use broken crypto, run on extremely
insecure systems, etc., we can't project total incompetence onto
attackers. There is a term, "windfall hacking" - the wind blows in the
right direction and a bug falls from the sky - and it really applies here.
When user security depends on the direction of the wind, we might try to
use or build a thing that is safe regardless of direction.
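One small habit that is safe regardless of which way the wind blows:
refuse to run anything whose integrity you cannot check. The sketch below
(Python standard library; the expected digest has to reach you over a
separately authenticated channel) does the weaker hash check - in real
life prefer proper signature verification, e.g. gpg --verify against a
key you already trust.

    # Verify a downloaded file against a published SHA-256 digest before use.
    # A hash only helps if the expected value arrives over a channel the
    # attacker cannot also rewrite; signatures are the stronger answer.
    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        path, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of(path)
        if actual == expected:
            print("OK: " + actual)
        else:
            print("MISMATCH - do not install")
            print(" expected: " + expected)
            print(" actual:   " + actual)
            sys.exit(1)

It will not stop a FinFisher-style hijack on its own, but it is the kind
of habit that does not depend on guessing the attacker's mood.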
So please don't put words in my mouth, Eric - I am not advocating for
perfection-or-nothing. I am suggesting that harm reduction is better
than ignoring and refusing to examine the facts about these systems; I
am suggesting that assuming good faith is dangerous and that assuming
neutrality is probably safer. Furthermore, I am suggesting that when
only extremely talented bad guys can examine internals easily, our users
have even less of a chance of staying safe.
> What I don't get is why you work so hard to discredit folks rather
> than educating them.
I'm trying pretty hard here to engage with you about issues where I
think a rational discourse is missing. You selectively quote what I
wrote, refuse to engage and put words into my mouth. I have spent more
than a decade trying to improve not only this discourse but also the
landscape of technical possibilities. Many others have done so - nearly
all more than me, I might add!
So you're telling me that I should try to educate folks - sure, I agree.
I do a lot of that! Frustratingly, I feel that you're spending your
energy refusing to even discuss or entertain my point of view, my
concerns or nearly anything I've said about these processes.
You take it as a personal attack, which is not even remotely my goal. I
disagree with your processes and often with your stated conclusions. To
give an example, I think that you are one of the few people I've met in
this space whom, as a *person*, I do respect.
As shocking as this might seem, Eric - you seem to address the issues on
this list not from a place of nationalism or racism but from a position
of wanting to improve things. You live in China, you work in China, and
you work with people on the ground all over the world. That is rare and,
honestly, part of why I have so little respect for other people pushing
bad ideas - they don't even bother to stick their necks out. I don't have
that view of you at all - that is why I'm emailing with you now. You are
exactly the kind of person who will make a positive impact when we come
to a better understanding of our different views.
> All of us on this list know you're a God
> (despite your sarcastic "perhaps I'm just dense").
It wasn't sarcastic and neither is this: no gods, no masters - I'm just
some guy who spends a lot of time on these things. Not even the best on
this list are gods; we, at the best of times, are a community with
different specialties. There are people on this list who best me in
every single way - awesome. Who cares? This isn't a matter of ePenis
measurement or ego - it is a matter of rational discourse.
I am regularly wrong and I try my best to plan for it. The only way to
know is to actually discuss these things, lest we fall back on Good Guy
Badges and Brand Names.
> We all understand
> you know more about cybersecurity and cybersurveillance (never mind
> that you hate certain words) than the rest of us combined.
I dislike those words because they are _meaningless_ and lack the detail
to move the discourse forward. What does cybersecurity mean? Well, to
break it down, it means some kind of internet/electronic security thing
- but what does security mean in that context? Security relating to
everything on the internet? Bah.
When we use terms like that, we aren't likely to be discussing things
meaningfully. Certainly not enough to really understand things to the
point of being able to make intelligent decisions.
Certainly - I do not know more than everyone combined; I know _nothing_
by comparison, but that isn't a meaningful comparison anyway. None of us
are as knowledgeable as all of us.
> Everyone
> loves gaining from your experience (e.g. (just to name the most
> recent examples) your teardown of OkayFreedom, the VPN security paper
> to which you referred a couple days ago, etc.).
>
But what do they gain? A citation? Or an understanding? In our
discussion, I don't feel like we're gaining very much ground - I am
tired of being dismissed as paranoid and as only wanting to address the
highest security levels. What we know about in public is likely not
secure for dozens of reasons, so we should strive not to use the things
publicly *known* to be unsafe. We should actually work on building
alternatives, rather than adopting a fatalistic attitude about time
limits.
All the best,
Jake