[liberationtech] Haystack and informed consent—A legal/philosophical response to Jacob's concerns

Babak Siavoshy babak at censorshipresearch.org
Sat Sep 11 07:45:15 PDT 2010


Haystack and informed consent—A legal/philosophical response to Jacob

First, I would like to introduce myself: my name is Babak Siavoshy; I
am a director at the CRC, and a philosopher and lawyer by training and
profession.  I do not specialize in technical issues and am therefore
not well versed in the common language of this board; I apologize in
advance for my lack of expertise.  I also want to apologize for the
somewhat legal/philosophical and academic discussion of the issues to
which I intend to subject the list.

I appreciate Jacob initiating this debate—as I have told my colleagues  
and friends in recent days, I believe that the increasingly open  
discussion about Haystack and anti-censorship tools generally has been  
a positive development that will make this community stronger in the  
long run.  I also want to note the positive and polite tone with which
Jacob has initiated this conversation, despite the increasingly
personal nature that some of the attacks on our organization have
taken of late.  I urge each of you to follow his lead.

Yesterday Jacob issued an ultimatum: cease all testing of the BETA
version of Haystack, or he will publish a document airing security
flaws in the program.  While I have not spoken with Jacob, I am told
by both Austin and others that his intentions are genuine, and that
this type of practice is common in the security industry.

Dan Colascione, our technical director and the chief architect of
Haystack, is meeting with Jacob in Seattle in order to discuss some of
the technical issues.  Dan will also join this mailing list in the
near future and respond, in a level of detail that neither Austin nor
I could provide, to the technical concerns that have been raised.
Please hold your technical questions until then.

But the concerns raised by Jacob are not, at their heart, technical  
concerns—they are moral, ethical, and political concerns—and I think  
that I am well positioned to speak to them.  While I am not an expert  
on Iran, I am intimately familiar with the socio-political situation  
in the country, and I know the risks and consequences of dissent both  
through personal experience and the experiences of family members and  
close friends.  As to ethical issues, those on this list who have
training in the law will know that ethics is a central part of the
legal profession, whose members must regularly balance duties—sometimes
conflicting—owed to their clients, to the courts, and to the integrity
of the legal system.

I address the moral, ethical, and political issues raised by Jacob here:

THE THRESHOLD QUESTION: WHAT LEVEL OF RISK IS ACCEPTABLE, AND UNDER  
WHAT CIRCUMSTANCES?

Assume the following:

For the purpose of this discussion, I would like to assume certain
facts arguendo:

1.) A Haystack BETA tester undergoes a certain risk every time he or
she tests the program in Iran.

2.) We can quantify that risk as follows: using Haystack BETA is safer
than using an ordinary proxy but, say, less safe than using a program
like Tor.  It falls somewhere in that range.

(Again, I do not know the extent to which these statements are true. I  
am no technical expert and that is not the point of this exercise.)

The threshold question, and Jacob's and my differing answers to that
question:

The only question that remains—at least, the moral, ethical, political  
question that remains—is:  assuming premises #1 and #2, above, under  
what circumstances is it proper to distribute the BETA version of  
Haystack to testers in Iran?

Jacob’s answer: “under no circumstances may Haystack be distributed if  
#1 and #2 are true.  The acceptable level of risk is determined by  
expert third parties, and not by the tester himself.”  This is my  
anecdotal understanding of Jacob’s position, both from the message  
Jacob previously sent, and from Austin’s description of his  
conversation with Jacob.  (If I misunderstood Jacob’s position, I  
apologize in advance—do not ascribe my reading of that position to  
Jacob without checking with him first.)

My answer: “The level of acceptable risk to a BETA tester of Haystack  
is determined by the tester’s informed consent.”  It is the tester,  
and not a third party, who decides the acceptable risk threshold.  If  
the risks, and the degree to which the risks are unknown, are properly  
disclosed in easy to understand, non-technical terms, then the  
tester’s act of consenting to the risk ends the moral inquiry.

I think Jacob’s answer (if I understand it correctly) is incorrect as  
an intuitive moral matter, as an ethical principle, and as a statement  
made in light of the present political context.  I think my answer is  
correct under all of those criteria, though I think the CRC needs to  
adjust its testing methodology to conform to the model I am proposing.

I will explain my reasons below.

The applicable ethical rule and its exceptions:

The correct rule: I think that the correct ethical rule in this  
context is that the level of acceptable risk for any activity depends,  
primarily, on two factors: disclosure and consent.  If the risks are  
disclosed, and the subject can and does consent to those risks, then  
the risk is acceptable.

This is not a novel proposition.  It has its roots in John Stuart
Mill's harm principle.  It is grounded in principles of moral
sovereignty, human autonomy, and dignity.

We see it expressed everywhere in the law.  In the context of medical  
care, it is called “informed consent.”  In the context of legal  
representation, it is called “consent to potential conflicts of
interest.”  In the context of tort law, it is often called
“assumption of risk”; and in the context of product liability it is
instantiated in the rule precluding liability whenever the manufacturer
properly warns the user of the dangers of a product.  In each case,
the basic, underlying ethical principle is the same—a risk or conflict
is acceptable to the extent that it is understood by, and therefore
consented to by, the risk-taker.

Exceptions to the rule—activities that should be banned regardless of  
disclosure and consent: There are, of course, some activities which,  
in conventional morality and practice, are banned categorically  
regardless of disclosure and consent.  I can think of two classes of  
activities.

Where consent is seen to be impossible:  In the first category, we ban  
an activity because we do not believe one or more of the parties can  
properly consent to the activity.  The ban on the use of certain drugs
is at least partly grounded in the fact that the drugs are addictive
and are thus thought to reduce the user's ability to consent.  And, to
use a quintessential example from the law school classroom: ostensibly
consensual sex between a minor and an adult is banned in many
jurisdictions on the theory that minors cannot, as a group, properly
consent to sex with an adult.

When the activity carries insurmountable negative externalities: The
other category consists of activities that are banned in society,
despite being engaged in with full disclosure and consent, because
they are seen to have negative externalities.  This is another part of
the rationale for banning drugs.  Society sees fit to prohibit the use
of certain drugs, even with disclosure and consent, because society
believes that drug use has negative effects that go beyond the effect
on the user's own body.  The same type of rationale (rightly or
wrongly) arguably forms the theoretical underpinning for banning
prostitution.  The consenting actor gets to make his own risk
assessment, but he does not get to make risk assessments for the third
parties, or for the society, harmed by his conduct.  Those third
parties and that society must be given a chance to make their own risk
assessments.

Application to the present case:

The application of this ethical principle to the question in this case  
is fairly straightforward.  Whether it is proper to distribute  
Haystack BETA to a tester depends only on whether we have properly  
disclosed the risks of using the program and whether the tester has  
properly consented to those risks.

I want to anticipate one objection to my conclusion: the objection is
that the average tester in Iran can never properly ascertain the risk
of a test, because he or she is not familiar with the technical
concepts involved.  This objection is essentially modeled on the
“impossible consent” exception to the general rule that I discussed
above.

This objection fails.  In our society and in conventional  
understanding, the lack of technical expertise does not remove the  
subject’s ability to morally consent to an activity or procedure.

The quintessential example is medical care, and the doctrine of  
informed consent in the medical field.  A patient can consent to a  
medical procedure even if the patient does not understand the  
technical details of the procedure.  A patient can—and patients  
do—properly consent even to a medical procedure whose risks are  
unknown.  The patient simply has to be informed of the extent to which
the risks are unknown.

This general principle applies in the same way in the context of
anti-censorship software.  The risks must be disclosed to the extent
possible, and in language understandable to a non-expert; the extent
to which the risks are unknown must also be disclosed.  After a proper
disclosure, it is consent, and consent alone (i.e., not Jacob, not
Evgeny, and not anyone else), that determines whether distribution is
proper.

[The second possible objection is that distribution of anti-censorship
software would have insurmountable negative externalities and must
therefore be prohibited.  I do not think anyone has made this
objection, so I will not address it here.]

The conclusion: if Haystack imposes a risk on the user that falls
somewhere between an ordinary proxy and Tor (again, I have no idea
whether this is true—these are merely the assumptions I made at the
beginning of the argument), then testers who understand this risk (and
the description of the level of risk should be much, much more specific
than what I gave earlier) should be permitted to use it without
interference from third parties.

Political considerations:

Several political considerations further support my model over the  
model I am attributing to Jacob.

First, Jacob’s argument incorrectly treats Iranian testers as passive  
subjects.  It is, in other words, premised on a patronizing moral  
principle: we should decide what level of risk is acceptable to  
others.  As an Iranian, and a person familiar with the situation and  
the people affected by the Iranian government’s brutal repression, I  
cannot agree with this statement.  And as an American who, like most
Americans, has at least a sliver (probably more) of libertarian blood
in his veins, I have a gut reaction against this type of proposition.

I will only say that Iranian testers know the risks of defying the
government.  They live with that risk every day.  They see their
brothers and sisters beaten in the street and harassed by police
officers; they see their government pass laws that make it a crime to
do things others take for granted; they hear stories of officers
breaking into dorm rooms and confiscating and destroying computers;
and they know the risks of using a BETA version of Haystack, a final
version of Tor, or Freegate.

Second, the type of patronizing argument envisioned here—which states
that a third party must determine the acceptable level of risk Iranian
testers may take—is generally inconsistent with the philosophy
underlying the anti-censorship movement.  The anti-censorship movement
is premised on the idea that we do not make decisions for others.  It
is premised on the idea of providing information and giving people the
opportunity for self-determination.  A categorical rule banning
certain risks is not only inconsistent with basic moral and ethical
principles; it is inconsistent with the basic philosophical framework
underlying the anti-censorship movement.

What this means for us—proposed compromise:

I have done what I think is a thorough analysis of the ethical issues  
involved.  There is room for an open debate, and those who know me  
know that I am always willing to change my mind when I see that I am  
wrong.  At this point, however, I think it is fairly clear that the  
proper model for testing Haystack is the model of “informed consent”.

While the CRC generally has this model in mind in its testing
procedures, I think that we need to have a thorough discussion about
specific steps that we can take within the organization to fully
adhere to this model.

There are four changes that I propose we make in order to fit this  
informed consent model.

1.) Disclaimer.  First, I propose that we include a disclaimer of the  
risks of Haystack—including the unknown risks of Haystack—with every  
BETA copy of Haystack that is distributed.  The disclaimer would be  
presented in the user’s language, and would give the user another  
opportunity to make the risk calculus before using the program.   
Second, I propose that we be more clear in our materials—absolutely,  
unequivocally clear—that Haystack is in the BETA stage of testing.

2.) Distribution.  We do inform all of our testers that the program is
risky, but we have less control over others who obtain the program
from our testers.  However, Haystack's functionality allows us to
revoke individual users' access to our servers.  As our second
measure, I propose that we take the necessary steps, both through
directives and through any technical means available, to ensure that
Haystack BETA testers do not distribute the program to others.

3.) Volume.  The more copies of Haystack BETA that are in circulation,
the more likely it is that some users will use the program without
informed consent.  I propose that we restrict our BETA testing pool to
a small number—fewer than 20 users.  Again, my understanding is that
there are technical means to enforce limits on the number of testers.
Before any increase in our testing pool, we will take measures to
reduce the risks the program poses to users.  (An illustrative sketch
of how measures 2 and 3 might be enforced follows this list.)

4.) Risk assessment and other measures.  Jacob proposed a number of
other measures that are technical in nature and may reduce the risks
allegedly posed by Haystack BETA, or make it easier to disclose some
of those risks.  To the extent that this is true, and consistent with
the principles I discussed above, we should adopt technical measures
in line with some of Jacob's and others' suggestions.
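
To make the second and third measures concrete, here is a minimal,
purely illustrative sketch of how a server-side registry could cap the
tester pool and revoke an individual tester's access.  I must stress
that this is a hypothetical example written in Python for the purposes
of this discussion, not Haystack's actual code or protocol; the names
used (TesterRegistry, enroll, revoke, is_authorized, MAX_TESTERS) are
invented for illustration, and the real enforcement mechanism is
something Dan and the technical team would have to describe.

import secrets

MAX_TESTERS = 20  # cap proposed in measure 3


class TesterRegistry:
    """Tracks enrolled BETA testers, caps their number, and allows revocation."""

    def __init__(self, max_testers: int = MAX_TESTERS):
        self.max_testers = max_testers
        self.active_keys = {}  # access key -> tester identifier

    def enroll(self, tester_name: str) -> str:
        """Issue a unique access key to a consenting tester; refuse once the cap is reached."""
        if len(self.active_keys) >= self.max_testers:
            raise RuntimeError("Tester pool is full; no new BETA copies may be activated.")
        key = secrets.token_urlsafe(32)
        self.active_keys[key] = tester_name
        return key

    def revoke(self, key: str) -> None:
        """Cut off a key, e.g. if a copy appears to have been passed on to a third party."""
        self.active_keys.pop(key, None)

    def is_authorized(self, key: str) -> bool:
        """Check performed by the server before handling any traffic tied to this key."""
        return key in self.active_keys


# Example usage: enroll a tester, then revoke the key and confirm access is cut off.
registry = TesterRegistry()
key = registry.enroll("tester-01")
assert registry.is_authorized(key)
registry.revoke(key)
assert not registry.is_authorized(key)

The point of the sketch is simply that the cap in measure 3 and the
revocation in measure 2 are, in principle, enforceable at the server,
so a tester's informed consent can be paired with technical limits on
further distribution.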

I think these are reasonable compromises, they will make Haystack BETA  
safer, and I hope that they assuage Jacob’s concerns.

And now to Jacob’s ultimatum—that he will release confidential  
information about the BETA version of a program that is being  
distributed to a few individuals.  I’ve discussed and (I think)  
addressed the moral, ethical, and political arguments underlying  
Jacob’s proposal above; and I think that it is clear from my  
discussion that I take Jacob’s concerns seriously and that I respect  
his views and the views of the tech community.

As I mentioned, I think Jacob's conduct is well-intentioned, but based
on misguided and patronizing moral and ethical principles.  As an
Iranian, I have a great deal of experience with those who sincerely  
believe that they can make moral decisions for a community.  I do not  
particularly appreciate this posture.  So I say this to Jacob, and I  
say it as earnestly and gently as I can, with the aim of encouraging  
further debate: a well-intentioned, misguided bully is still a bully.  
(I do not mean to insinuate that there is any moral equivalence  
between Jacob’s ultimatum and the atrocities committed in Iran: there  
is not; but it is an analogy that makes a point).

Whether we will abide by Jacob’s ultimatum or not will depend on the  
CRC’s continued discussions regarding the technical issues that Jacob  
has raised and will raise with Dan and Austin in the upcoming days.  I  
also believe, from my conversations with Dan and Austin, that we will  
henceforth be more open to discussing certain technical aspects of the
program with the tech community.  I think the measures I have proposed  
will, in the long run, contribute to the safety of Iranian testers of  
Haystack, and perhaps of other anti-censorship software as well.

These are all positive developments, and I thank each of you for your  
continued contribution to the debate.

Babak Siavoshy
Censorship Research Center





On Sep 10, 2010, at 11:09 PM, Jacob Appelbaum wrote:

> Hi,
>
> As some of you know, I've been researching Haystack on a technical
> level. I've learned a great deal of information about it and certainly
> enough to make some definitive statements. Most of my research was
> confirmed today and after speaking with Austin Heap he agreed with  
> me on
> many of my points. I was relieved that we were able to have such a
> productive conversation and the outcome is probably the safest  
> possible
> at the moment.
>
> I think that Austin has his heart in the right place and today he
> claimed to have taken some actions that demonstrate this to me.
>
> Specifically, Austin claimed and agreed that I could state the  
> following
> on this mailing list:
>
> 	Haystack has been turned off as of ~19:00 PST.
>
> I have no independent confirmation on the truth of this claim but I
> believe Austin was being sincere with me. I invite Austin, Babak, and
> Daniel to confirm this statement in public.
>
> Furthermore, Austin stated that Haystack will not be run again until
> there is a solid published threat model, a solid peer reviewed design,
> and a real security review of the Haystack implementation. He has also
> agreed to review the claims made on his websites. I imagine that he  
> will
> have to check in with others before following through with those  
> claims
> but I believe he has made them in good faith.
>
> He additionally agreed that Haystack will not use human testers moving
> forward because of the risks involved.
>
> He has agreed to engage with this mailing list to address the concerns
> voiced here as well as those voiced elsewhere.
>
> In the interest of not putting people in Iran at serious risk, I am
> going to remain silent for the time being on the issues I have
> discovered. I reserve the right to change my mind if I believe that
> people are being put into harm's way.
>
> I believe that we only benefit from more circumvention, anonymity,
> privacy and security tools but only if they're safely designed, safely
> implemented, and honestly reviewed in the open.
>
> A rich ecosystem of safe to use tools is important for the world  
> that we
> all want to create.
>
> I look forward to Austin and Daniel engaging with this mailing list  
> and
> with the rest of the circumvention community openly.
>
> Sincerely,
> Jacob
>

--
Babak Siavoshy
Managing Director
Censorship Research Center
babak at censorshipresearch.org
510-684-7177




