[liberationtech] Haystack and informed consent—A legal/philosophical response to Jacob's concerns
Babak Siavoshy
babak at censorshipresearch.org
Sat Sep 11 13:34:38 PDT 2010
Lib Tech --
1.) As both Joshua Cohen and Evgeny Morozov pointed out, in my
interview with Roger Cohen -- the only interview I have given on the
subject -- I made a cardinal mistake by speaking about technology
when I know nothing of it.
2.) While it does not change any relevant part of this debate, it is
worth noting that I was misquoted -- or at least, under-quoted -- in
the article. All of the parts where I qualified my statements -- "If
we are able to complete the project"; "I am not a tech expert, but";
"if Haystack is completed"; and so on were omitted from my interview
with Mr. Roger Cohen. I brought this fact up with a friend who is a
journalist. He told me to contact Mr. Cohen. At the time, I didn't
see the point -- exporting Haystack was illegal, and the article had
already been printed. Now I see that was a mistake.
3.) Again, while it doesn't change any relevant part of this debate, I
do not know whether the claim about encryption (the non-hyperbolic
version of it) is false. What I do know is that I was not qualified to
make it, and that I stated my lack of qualifications in the interview.
4.) Haystack's testers were informed that there are significant risks
involved with the program, and that they should know that they are
playing with fire. Clearly, given the media attention, we can and
should have done more.
5.) One fact that is lost in all of this is that we have had only a
small number of tests in Iran, and with testers who generally have
only one or two degrees of separation from us. Since we have received
the license, the number of individual testers has been between 10 and
20. The number of people testing the program at any one time has been
between 3 and 4. This has enabled us to be in closer contact (but not
direct contact) with the testers, and apprise them of the risks
involved.
6.) I take it that all parties agree that the proper paradigm is
informed consent--rather than a straw poll of the Stanford Lib Tech
group--but that the principal disagreement is over whether we have
made the proper disclosures. Let's concede this is true. How can we
fix it? Please give us constructive suggestions.
It pains me, more than you can imagine, to be accused of carelessness
with the lives of other individuals. Thankfully, nothing of the sort
has happened to date.
I look forward to your further comments.
On Sep 11, 2010, at 3:06 PM, Evgeny Morozov wrote:
> Disclaimer: I'm neither philosopher nor computer expert. On the
> bright side, I know how to search the Internet.
>
> Babak Siavoshy in March 2010: “Double-click on Haystack and you
> browse the Internet anonymously and safely. It’s encrypted at such a
> level it would take thousands of years to figure out what you’re
> saying. It’s a potent open-society tool. It’s just a matter of
> getting it to Iran — and that’s still illegal."
>
> Babak Siavoshy in Sept 2010: see below
>
> My non-philosopher's common sense dictates that Haystack's testers
> either a) were told specifically to disregard whatever Haystack's
> management was telling the media, or b) were misled into believing
> that Haystack was more secure than it really was.
>
> If it is a), I would like Babak to publicly acknowledge this. If it
> is b), I don't see how the harm principle applies.
>
> Best,
>
> Evgeny
>
> On Sat, Sep 11, 2010 at 7:45 AM, Babak Siavoshy <babak at censorshipresearch.org
> > wrote:
> Haystack and informed consent—A legal/philosophical response to Jacob
>
> First, I would like to introduce myself: My name is Babak Siavoshy,
> I am a director at the CRC, and a philosopher and lawyer by training
> and profession. I do not specialize in technical issues, and am
> therefore not well versed in the common language of this board; I
> apologize in advance for my lack of expertise. I also want to
> apologize for the somewhat legal/philosophical and academic
> discussion of the issues to which I intend to subject the list.
>
> I appreciate Jacob initiating this debate—as I have told my
> colleagues and friends in recent days, I believe that the
> increasingly open discussion about Haystack and anti-censorship
> tools generally has been a positive development that will make this
> community stronger in the long run. I also want to note the
> positive and polite tone with which Jacob has initiated this
> conversation, despite the increasingly personal nature some of the
> attacks on our organization have taken as of late. I urge each of
> you to follow his lead.
>
> Yesterday Jacob issued an ultimatum: cease all testing of the BETA
> version of Haystack or he will publish a document airing security
> flaws in the program. While I have not spoken with Jacob, I am told
> both by Austin and others that his intentions are genuine, and that
> this type of practice is common in the security industry.
>
> Dan Colascione, our technical director and the chief architect of
> Haystack, is meeting with Jacob in Seattle in order to discuss some
> of the technical issues. Dan will also join this mailing list in the
> near future and respond, in detail that neither Austin nor I could
> provide, to the technical concerns that have been raised. Please
> hold your technical questions until then.
>
> But the concerns raised by Jacob are not, at their heart, technical
> concerns—they are moral, ethical, and political concerns—and I think
> that I am well positioned to speak to them. While I am not an
> expert on Iran, I am intimately familiar with the socio-political
> situation in the country, and I know the risks and consequences of
> dissent both through personal experience and the experiences of
> family members and close friends. As to ethical issues, those on
> this list who have training in the law will know that ethics is a
> central part of that profession, whose members must regularly
> balance duties—sometimes conflicting—owed to their clients, the
> courts, and to the integrity of the legal system.
>
> I address the moral, ethical, and political issues raised by Jacob
> here:
>
> THE THRESHOLD QUESTION: WHAT LEVEL OF RISK IS ACCEPTABLE, AND UNDER
> WHAT CIRCUMSTANCES?
>
> Assume the following:
> 
> For the purpose of this discussion, I would like to assume certain
> facts arguendo:
> 
> 1.) Assume that a Haystack BETA tester undergoes a certain risk
> every time he or she tests the program in Iran.
> 
> 2.) Assume that we can quantify that risk as follows: using Haystack
> BETA is safer than using an ordinary proxy, but, say, less safe than
> using a program like Tor. It is somewhere in that range.
>
> (Again, I do not know the extent to which these statements are true.
> I am no technical expert and that is not the point of this exercise.)
>
> The threshold question, and Jacob’s and my differing answers to
> that question:
>
> The only question that remains—at least, the moral, ethical,
> political question that remains—is: assuming premises #1 and #2,
> above, under what circumstances is it proper to distribute the BETA
> version of Haystack to testers in Iran?
>
> Jacob’s answer: “under no circumstances may Haystack be distributed
> if #1 and #2 are true. The acceptable level of risk is determined
> by expert third parties, and not by the tester himself.” This is my
> anecdotal understanding of Jacob’s position, both from the message
> Jacob previously sent, and from Austin’s description of his
> conversation with Jacob. (If I misunderstood Jacob’s position, I
> apologize in advance—do not ascribe my reading of that position to
> Jacob without checking with him first.)
>
> My answer: “The level of acceptable risk to a BETA tester of
> Haystack is determined by the tester’s informed consent.” It is the
> tester, and not a third party, who decides the acceptable risk
> threshold. If the risks, and the degree to which the risks are
> unknown, are properly disclosed in easy to understand, non-technical
> terms, then the tester’s act of consenting to the risk ends the
> moral inquiry.
>
> I think Jacob’s answer (if I understand it correctly) is incorrect
> as an intuitive moral matter, as an ethical principle, and as a
> statement made in light of the present political context. I think
> my answer is correct under all of those criteria, though I think the
> CRC needs to adjust its testing methodology to conform to the model
> I am proposing.
>
> I will explain my reasons below.
>
> The applicable ethical rule and its exceptions:
>
> The correct rule: I think that the correct ethical rule in this
> context is that the level of acceptable risk for any activity
> depends, primarily, on two factors: disclosure and consent. If the
> risks are disclosed, and the subject can and does consent to those
> risks, then the risk is acceptable.
>
> This is not a novel proposition. It has its roots in John Stuart
> Mill’s harm principle. It is based in principles of moral
> sovereignty and human autonomy and dignity.
>
> We see it expressed everywhere in the law. In the context of
> medical care, it is called “informed consent.” In the context of
> legal representation, it is called “consent to potential conflicts
> of interest.” In the context of tort law it is often called
> “assumption of risk”; and in the context of product liability it is
> instantiated in the law precluding liability whenever the
> manufacturer properly warns the user of the dangers of a product.
> In each case, the basic, underlying ethical principle is the same—a
> risk or conflict is acceptable to the extent that it is understood
> by the risk-taker and therefore consented to by that risk-taker.
>
> Exceptions to the rule—activities that should be banned regardless
> of disclosure and consent: There are, of course, some activities
> which, in conventional morality and practice, are banned
> categorically regardless of disclosure and consent. I can think of
> two classes of activities.
>
> Where consent is seen to be impossible: In the first category, we
> ban an activity because we do not believe one or more of the parties
> can properly consent to the activity. The ban on the use of certain
> drugs is grounded, at least in part, in the fact that the drugs are
> addictive and thus thought to reduce the user’s ability to
> consent. And, to use a quintessential example from the law school
> classroom—ostensibly consensual sex between a minor and an adult is
> banned in many jurisdictions on the theory that minors cannot, as a
> group, properly consent to sex with an adult.
>
> When the activity carries insurmountable negative externalities: The
> other category of activities that are banned in society despite
> being engaged in with full disclosure and consent comprises
> activities seen to have negative externalities. This is another part
> of the rationale for banning drugs. Society sees fit to prohibit
> the use of certain drugs, even with disclosure and consent, because
> society believes that drug use has negative effects that go beyond
> the user’s effect on his own body. The same type of rationale
> (rightly or wrongly) arguably forms the theoretical underpinning for
> banning prostitution. The consenting actor gets to make his own
> risk assessment, but he does not get to make risk assessments for
> third parties/society harmed by his conduct. Those third parties/
> society must be given a chance to make their own risk assessments.
>
> Application to the present case:
>
> The application of this ethical principle to the question in this
> case is fairly straightforward. Whether it is proper to distribute
> Haystack BETA to a tester depends only on whether we have properly
> disclosed the risks of using the program and whether the tester has
> properly consented to those risks.
>
> I want to anticipate one objection to my conclusion: the objection
> is that the average tester in Iran can never properly ascertain the
> risk of a test, because they are not familiar with the technical
> concepts involved. This objection is essentially modeled on the
> “impossible consent” exceptions to the general rule that I discussed
> above.
>
> This objection fails. In our society and in conventional
> understanding, the lack of technical expertise does not remove the
> subject’s ability to morally consent to an activity or procedure.
>
> The quintessential example is medical care, and the doctrine of
> informed consent in the medical field. A patient can consent to a
> medical procedure even if the patient does not understand the
> technical details of the procedure. A patient can—and patients do—
> properly consent even to a medical procedure whose risks are
> unknown. The patient simply has to be informed of the extent to
> which the risks are unknown.
>
> This general principle applies in the same way in the context of
> anti-censorship software. The risks must be disclosed to the extent
> possible, and in language understandable to a non-expert; and the
> extent to which the risks are unknown must also be disclosed. After
> a proper disclosure, it is consent, and consent alone (i.e., not
> Jacob, not Evgeny, and not anyone else) that determines whether
> distribution is proper.
>
> [The second possible objection is that distribution of
> anticensorship software would have insurmountable negative
> externalities and must therefore be prohibited. I don’t think
> anyone has made this objection, so I will not address it here.]
>
> The conclusion: if Haystack imposes a risk on the user which is
> somewhere between an ordinary proxy and Tor (again, I have no idea
> if this is true—these are merely assumptions I made at the beginning
> of the argument) then testers who understand this risk (and the
> description of the level of risk should be much, much more specific
> than what I gave earlier) should be permitted to use it without
> interference from third parties.
>
> Political considerations:
>
> Several political considerations further support my model over the
> model I am attributing to Jacob.
>
> First, Jacob’s argument incorrectly treats Iranian testers as
> passive subjects. It is, in other words, premised on a patronizing
> moral principle: we should decide what level of risk is acceptable
> to others. As an Iranian, and a person familiar with the situation
> and the people affected by the Iranian government’s brutal
> repression, I cannot agree with this statement. And as an American,
> who like most Americans has at least a sliver (probably more) of
> libertarian blood flowing in my veins, I have a gut reaction against
> this type of proposition.
>
> I will only say that Iranian testers know the risks of defying the
> government. They live with that risk every day. They see their
> brothers and sisters beaten in the street and harassed by police
> officers; they see their government pass laws that make it a crime
> to do things others take for granted; they hear stories of officers
> breaking into dorm rooms and confiscating and destroying computers;
> and they know the risk of using a BETA version of Haystack, or a
> final version of Tor or of Freegate.
>
> Second, the type of patronizing argument envisioned here—which
> states that a third party must determine the level of risk Iranian
> testers are permitted to take—is generally inconsistent
> with the philosophy underlying the anti-censorship movement. Anti-
> censorship is premised on the idea that we do not make decisions for
> others. It is premised on the idea of providing information and
> giving people the opportunity for self-determination. A
> categorical rule banning certain risks is not only inconsistent with
> basic moral and ethical principles, it is inconsistent with the
> basic philosophical framework underlying the anti-censorship movement.
>
> What this means for us—proposed compromise:
>
> I have done what I think is a thorough analysis of the ethical
> issues involved. There is room for an open debate, and those who
> know me know that I am always willing to change my mind when I see
> that I am wrong. At this point, however, I think it is fairly clear
> that the proper model for testing Haystack is the model of “informed
> consent”.
>
> While the CRC generally has this model in mind in its testing
> procedures, I think that we need to have a thorough discussion about
> specific steps that we can take within the organization to fully
> adhere to this model.
>
> There are four changes that I propose we make in order to fit this
> informed consent model.
>
> 1.) Disclaimer. First, I propose that we include a disclaimer of
> the risks of Haystack—including the unknown risks of Haystack—with
> every BETA copy of Haystack that is distributed. The disclaimer
> would be presented in the user’s language, and would give the user
> another opportunity to make the risk calculus before using the
> program. Second, I propose that we be more clear in our materials—
> absolutely, unequivocally clear—that Haystack is in the BETA stage
> of testing.
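> 
> (To make the disclaimer idea concrete, here is a minimal sketch, in
> Python, of what a launch-time consent gate could look like. The text,
> names, and prompt below are purely illustrative and are not taken
> from Haystack's actual code; in practice the disclaimer would be
> shown in the tester's own language.)
> 
>     import sys
> 
>     DISCLAIMER = (
>         "Haystack is BETA software. Its security has not been fully "
>         "reviewed, and some of the risks of using it are unknown. "
>         "Using it to bypass censorship may expose you to serious "
>         "legal and personal risk. Continue only if you understand "
>         "and accept these risks."
>     )
> 
>     def require_informed_consent():
>         # Show the disclaimer on every launch, before any network
>         # activity, and refuse to run unless the tester explicitly
>         # agrees.
>         print(DISCLAIMER)
>         answer = input("Type 'I ACCEPT' to continue: ")
>         if answer.strip().upper() != "I ACCEPT":
>             sys.exit("Consent not given; Haystack will not start.")
> 
>     require_informed_consent()
>     # ...only after this point would the circumvention tunnel start...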
>
> 2.) Distribution. We do inform all of our testers that the program
> is risky. But we have less control over others who obtain the
> program from our testers. However, Haystack’s functionality allows
> us to revoke individual users’ access to the servers. As our second
> measure, I propose that we take the necessary steps, both through
> directives and any technical means available, to ensure that each
> Haystack BETA tester does not distribute the program to others.
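> 
> (For what revoking access might look like, a rough server-side sketch
> in Python, assuming each tester authenticates with an individually
> issued credential. The names and data structures here are
> hypothetical and do not describe Haystack's actual implementation.)
> 
>     # Hypothetical allowlist keyed by individual tester credentials.
>     AUTHORIZED_TESTERS = {
>         "tester-01": "secret-issued-to-tester-01",
>         "tester-02": "secret-issued-to-tester-02",
>     }
> 
>     def is_authorized(tester_id, credential):
>         # Serve a connection only if the tester is still on the
>         # allowlist and presents the credential issued to them.
>         return AUTHORIZED_TESTERS.get(tester_id) == credential
> 
>     def revoke(tester_id):
>         # Removing the entry cuts that tester's copy -- and any copy
>         # passed on to someone else -- off from the servers.
>         AUTHORIZED_TESTERS.pop(tester_id, None)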
>
> 3.) Volume. The more copies of Haystack BETA there are in
> circulation, the more likely it is that certain users will use the
> program without informed consent. I propose that we restrict our
> BETA testing pool to a small number—fewer than 20 users. Again, my
> understanding is that there are technical means to enforce that
> limit on the testers. Before any increase in our testing pool, we
> will take measures to reduce the risks posed by the program to users.
>
> 4.) Risk assessment and other measures. Jacob proposed a number of
> other measures that are technical in nature and may reduce the risks
> allegedly posed by Haystack BETA, or make it easier to disclose some
> of those risks. To the extent that this is true, and consistent
> with the principles I discussed above, we should take technical
> measures consistent with some of Jacob’s and others’ suggestions.
>
> I think these are reasonable compromises, they will make Haystack
> BETA safer, and I hope that they assuage Jacob’s concerns.
>
> And now to Jacob’s ultimatum—that he will release confidential
> information about the BETA version of a program that is being
> distributed to a few individuals. I’ve discussed and (I think)
> addressed the moral, ethical, and political arguments underlying
> Jacob’s proposal above; and I think that it is clear from my
> discussion that I take Jacob’s concerns seriously and that I respect
> his views and the views of the tech community.
>
> As I mentioned, I think Jacob’s conduct is well-intentioned, but
> based in misguided and patronizing moral and ethical principles. As
> an Iranian, I have a great deal of experience with those who
> sincerely believe that they can make moral decisions for a
> community. I do not particularly appreciate this posture. So I say
> this to Jacob, and I say it as earnestly and gently as I can, with
> the aim of encouraging further debate: a well-intentioned, misguided
> bully is still a bully. (I do not mean to insinuate that there is
> any moral equivalence between Jacob’s ultimatum and the atrocities
> committed in Iran: there is not; but it is an analogy that makes a
> point).
>
> Whether we will abide by Jacob’s ultimatum or not will depend on the
> CRC’s continued discussions regarding the technical issues that
> Jacob has raised and will raise with Dan and Austin in the upcoming
> days. I also believe, from my conversations with Dan and Austin,
> that we will henceforth be more open to discussing certain technical
> aspects of the program with the tech community. I think the
> measures I have proposed will, in the long run, contribute to the
> safety of Iranian testers of Haystack, and perhaps of other anti-
> censorship software as well.
>
> These are all positive developments, and I thank each of you for
> your continued contribution to the debate.
>
> Babak Siavoshy
> Censorship Research Center
>
>
>
>
>
> On Sep 10, 2010, at 11:09 PM, Jacob Appelbaum wrote:
>
>> Hi,
>>
>> As some of you know, I've been researching Haystack on a technical
>> level. I've learned a great deal of information about it and
>> certainly
>> enough to make some definitive statements. Most of my research was
>> confirmed today and after speaking with Austin Heap he agreed with
>> me on
>> many of my points. I was relieved that we were able to have such a
>> productive conversation and the outcome is probably the safest
>> possible
>> at the moment.
>>
>> I think that Austin has his heart in the right place and today he
>> claimed to have taken some actions that demonstrate this to me.
>>
>> Specifically, Austin claimed and agreed that I could state the
>> following
>> on this mailing list:
>>
>> Haystack has been turned off as of ~19:00 PST.
>>
>> I have no independent confirmation on the truth of this claim but I
>> believe Austin was being sincere with me. I invite Austin, Babak, and
>> Daniel to confirm this statement in public.
>>
>> Furthermore, Austin stated that Haystack will not be run again until
>> there is a solid published threat model, a solid peer reviewed
>> design,
>> and a real security review of the Haystack implementation. He has
>> also
>> agreed to review the claims made on his websites. I imagine that he
>> will
>> have to check in with others before following through with those
>> claims
>> but I believe he has made them in good faith.
>>
>> He additionally agreed that Haystack will not use human testers
>> moving
>> forward because of the risks involved.
>>
>> He has agreed to engage with this mailing list to address the
>> concerns
>> voiced here as well as those voiced elsewhere.
>>
>> In the interest of not putting people in Iran at serious risk, I am
>> going to remain silent for the time being on the issues I have
>> discovered. I reserve the right to change my mind if I believe that
>> people are being put into harm's way.
>>
>> I believe that we only benefit from more circumvention, anonymity,
>> privacy and security tools but only if they're safely designed,
>> safely
>> implemented, and honestly reviewed in the open.
>>
>> A rich ecosystem of safe to use tools is important for the world
>> that we
>> all want to create.
>>
>> I look forward to Austin and Daniel engaging with this mailing list
>> and
>> with the rest of the circumvention community openly.
>>
>> Sincerely,
>> Jacob
>>
>
> --
> Babak Siavoshy
> Managing Director
> Censorship Research Center
> babak at censorshipresearch.org
> 510-684-7177
>
>
>
>
>
>
> _______________________________________________
> liberationtech mailing list
> liberationtech at lists.stanford.edu
>
> Should you need to change your subscription options, please go to:
>
> https://mailman.stanford.edu/mailman/listinfo/liberationtech
>
>
--
Babak Siavoshy
Managing Director
Censorship Research Center
babak at censorshipresearch.org
510-684-7177