[liberationtech] FW: The security and ethics
Jacob Appelbaum
jacob at appelbaum.net
Wed Feb 9 15:53:12 PST 2011
On 02/09/2011 10:51 AM, Jim Youll wrote:
> Minor clarification:
>
> I ** do not ** believe normal people who want to use electronic
> resources to communicate unpopular, potentially self-endangering, or
> merely private messages, should have to RTFM and become students of
> "personal technological security" any more than I would expect my
> artist friends to become mechanics before they may operate motor
> scooters.
I don't either. I'm sure that I come off as an arrogant technical weenie
with some of my statements. It's quite a mix of policy people and
technical folks on this list, which makes it hard to pitch the tone of an
email, or of any statement, so that it reads smoothly to everyone.
>
> I ** do ** believe that hopeful thoughts and unprovably-confident
> descriptions of complex systems are NOT a substitute for a proper
> explanation of what can go wrong or what "must" be understood to use
> a technology "safely". Any such explanations must consider not only
> the moment but possibly the future, with information used not only
> alone but in combination with other information from other sources.
> You don't leak a key but your correspondent does, because he made a
> mistake. Are you both going to prison tomorrow?
>
I entirely agree with this. It's really important to examine and retrain
some of our intuitions - we intuitively understand a lot of safety
issues. However, we also dismiss the efforts of people who put tons of
money into education. Seat-belt and DUI education campaigns are a
perfect example. They're not arguing physics in those campaigns; they're
arguing about consequences, and I think that's something most people
feel they understand pretty well.
> I see people speaking and writing /in the role of experts/ but
> expressing hopeful thoughts, rather than candidly expert opinions and
> cold facts. I do speak out against the idea of hopeful thoughts
> trumping cold fact. Techno-security talk is, and should be, fussy
> like that. Experts should say what they know, not what they 'think'.
>
I obviously agree with this, but I think it's fine to express
intuitions. For example, I would not be surprised if Skype had a
backdoor or a bugdoor, nor if they pushed out an update to a specific
user to install one. The architecture seems to allow it, and I think it
is prudent to say so whenever Skype comes up.
> ====
>
> In the span of the past year the conversation has turned in a very
> useful, unprecedented way from "stop being so negative all the time"
> to "OMG that thing really COULD get somebody killed". This is a good
> start but now it exposes the gap in information that's been there all
> along. Didn't matter until now ... now that opponents of free
> communication and free thought have better technology than was
> predicted by the techno-hopeful - and they're using it without shame
> and without always hiding it.
>
It's an interesting thing to see the perspective shift. All of a sudden,
we realize that CALEA isn't just used by "good FBI agents" but also
abusive FBI agents, hackers, governments without due process, and so on.
Gosh, who would have considered that? Oh... I guess just the cypherpunks
over a decade ago!
> To an earlier point on this list about crypto / tech / security
> experts speaking in academic tones: don't shout those guys down. It's
> not their job to sugar coat things. Mathematicians and engineers -
> the good ones, anyway - speak in absolutes because their world is all
> about absolutes. In technology, a claim is either right, or it is
> wrong. An implementation of a protocol or algorithm is either
> correct, or it is not. Their fussing and complaining have no doubt
> already saved lives. Let them do what they do and be glad they
> bother.
Thanks.
>
> People are working on this problem of assessing what's available in
> the "stack" of { code, hardware, connectivity, third party services }
> and trying to sort out what combinations, if any, can be used
> "safely,"; what "safely" might mean; how to /describe/ what "safe"
> means in a way that can be used on the ground by normal people to
> understand their own situations. Apparently I'm going to be one of
> those people shortly. It's exciting and scary and maybe impossible to
> do as correctly as is needed. These days I would not want to be a
> software creator making a grand claim about the "code -> safety"
> function of my own creations. One cannot make these claims with
> confidence absent consideration of the always-unique (and now
> always-changing) environments in which that code is set into motion.
>
I don't think it's about perfection at all times. However, I do think we
should set solid goals and really try to reach them - and it's also very
valuable for users to simply understand when those goals are not
reached. The failure modes are very serious and the consequences may be
deadly. That seems certain in the context of so-called liberation
technologies.
All the best,
Jacob