[liberationtech] Haystack and informed consent—A legal/philosophical response to Jacob's concerns
Jim Youll
jyoull at alum.mit.edu
Sat Sep 11 15:46:14 PDT 2010
I had a (small) hand in this discussion early on, and then was pulled away by life, but I congratulate all who have taken so much time to think about the many issues in play.
Most of what can be asked about Haystack has been. The answers may or may not come. They may or may not be sufficient or satisfactory.
If you read no further, please pile onto my hope that lessons learned about the study and critique of technologies for civil disobedience don't have to be discovered anew when the next subversive technology arrives. Technologists, philosophers, and writers may all learn something from this conversation that is useful now, and the next time.
On Sep 11, 2010, at 4:34 PM, Babak Siavoshy wrote:
> 4.) Haystack's testers were informed that there are significant risks involved with the program, and that they should know that they are playing with fire. Clearly, given the media attention, we can and should have done more.
The media attention wasn't the problem. Over-promising by itself is dangerous and messy, but not fatal.
But can there be any justification for live testing on human subjects without sufficient prior review, particularly with risky new tech? By definition, any life-critical system is risky until tested and proven to the extent that it can be.
> 5.) One fact that is lost in all of this is that we have had only a small number of tests in Iran, and with testers who generally have only one or two degrees of separation with us. Since we have received the license, the number of individual testers has been between 10 and 20. The number of people testing the program at any one time has been between 3 and 4. This has enabled us to be in closer contact (but not direct contact) with the testers, and apprise them of the risks involved.
What is the point of testing early software in Iran at all? I have to ask your reasons for testing there, rather than, say, in the US, where there is no risk to any test participant for either (a) possessing software intended to subvert government regulations, or (b) transmitting / publishing illegal or undesirable thoughts.
> 6.) I take it that all parties agree that the proper paradigm is informed consent--rather than a straw poll of the Stanford Lib Tech group--but that the principal disagreement is with whether we have made the proper disclosures. Let's concede this is true. How can we fix it? Please give us constructive suggestions.
Around Aug. 17th, Steve Weis, Jerry Leichter, and I all aired concerns on the list about Haystack. The comments I imported were written by credible experts on a cryptography list with probably minimal overlap with Stanford Lib Tech. That list carried out a discussion of Haystack under its own power. The experts considered Haystack from a rigorously technical viewpoint... and found not much good to say. These are people who designed the systems that protect banking and online transactions every day - technology that works - and who sat on many of the standards committees that drive security both on the Internet and offline. They have considered the issues in depth over decades.
I imported a chunk of that discussion to this list because the ".. but it's for a good cause" defense has always been an issue for freedom-seeking technologists and the things they build. Intentions really do not matter if a thing does not work. Good intentions, bad outcomes... with this and similar technologies, there's no way to know how bad the outcomes may have been.
The real bottom line or principal disagreement (for those who care about technologies for secure communication)?
It's not about consent. I suggested a few weeks ago that your target[1] audience cannot provide "informed consent" because it is not sufficiently expert to calculate the immediate and long-term risks of complex technology, and the myriad ways information (identities, relationships) may leak over time. This is not an insult. The analysis of complex systems is not a judgement call but a matter of science and calculation, beginning with well-framed questions composed by experts.
The principal disagreement (I hope) is that there is no precedent for the deployment of sensitive, real-world security technologies that have not been dissected and perhaps ripped to shreds by a wide audience looking for both immediate technological exploits/shortcomings and deployment vulnerabilities at the intersection of human action and myriad technologies in the wild.
In short: has any other security/privacy technology been deployed in a risk environment without extensive, open, peer review _of its design_ by experts?
I am reminded this evening of a friend, a Ph.D. mathematician and analyst with many impressive accomplishments, who went about his work quietly and rigorously, and who didn't stop until he either solved a problem or knew it wasn't solvable within some bounds. I mention my friend because I had personal experience with him, and none with another scientist with similar prowess and confidence - Richard Feynman, who was part of the committee that investigated the Challenger disaster...
What would either of these brilliant guys do when faced with questions like "Is Haystack 'safe'?" and "are these claims credible?", with or without a celebrity endorsement by the Secretary of State?
My friend Stephen would have sat quietly to work out and document the problems - not as opinions, but as science. Feynman single-handedly determined the cause of the disaster, then had to push through politics to have his say, even seeing his most important thoughts relegated to an appendix behind the "official" report. [2]
The appearance and promotion of Haystack (and other similar technologies, including TOR - which Bram Cohen discusses in another message) must be carefully scrutinized. Normal people are not able to give proper informed consent. The risks of a cryptographic system in operation in a hostile environment are not just in the risks of a single instance of using the system, but bound up in infinite externalities - where did the messages go, how can the system be compromised, what external systems must work perfectly to provide the intended protection, what mistakes can a person make, and on and on...
I now think that those who worry about the risks in privacy-providing systems, who test and publish risk analyses of products that are intended to protect troublemakers and thought criminals, are more important than those who build or claim to build those systems. It's time to stop publishing opinions and hopes for the repressed, and time to break this "problem" down into its irreducible technological, political, and human components. Solve those that can be solved - and then soberly determine whether the remaining unsolvable pieces leave us with a system that is sufficiently secure, or with a system having problems that are so far outside the control of technology that they cannot be solved by code, despite good intentions.
Constructive suggestions: publish your methods, algorithms, and system architecture. Explain whatever analysis you have for its known risks and also its protections against various attacks. Listen to those who read and critique the system. If problems can be fixed, fix them. If problems cannot be fixed, change the design, or change the intent of the product, or stop if the problems cannot be overcome. Repeat until sufficiently safe, or until it's determined that the problem isn't solvable "right now."
kind regards and thanks for your time,
- jim
====
[1] pun intended
[2] http://www.fotuva.org/feynman/challenger-appendix.html