[liberationtech] Haystack and informed consent—A legal/philosophical response to Jacob's concerns

Jacob Appelbaum jacob at appelbaum.net
Sat Sep 11 20:16:17 PDT 2010


Hello Babak,

On 09/11/2010 07:45 AM, Babak Siavoshy wrote:
> Haystack and informed consent—A legal/philosophical response to Jacob
> 
> First, I would like to introduce myself: My name is Babak Siavoshy, I am
> a director at the CRC, and a philosopher and lawyer by training and
> profession.  I do not specialize in technical issues, and am therefore
> not well versed in the common language of this board; I apologize in
> advance for my lack of expertise.  And I also want to apologize for the
> somewhat legal/philosophical and academic discussion of the issues to
> which I intend to subject the list.
> 

Noted.

> I appreciate Jacob initiating this debate—as I have told my colleagues
> and friends in recent days, I believe that the increasingly open
> discussion about Haystack and anti-censorship tools generally has been a
> positive development that will make this community stronger in the long
> run.  I also want to note the positive and polite tone with which Jacob
> has initiated this conversation, despite the increasingly personal
> nature some of the attacks on our organization have taken as of late.  I
> urge each of you to follow his lead.
> 

Thank you for the kind words, Babak.

> Yesterday Jacob issued an ultimatum: cease all testing of the BETA
> version of Haystack or he will publish a document airing security flaws
> in the program.  While I have not spoken with Jacob, I am told both by
> Austin and others that his intentions are genuine, and that this type of
> practice is common in the security industry.

A point of minor correction - Austin turned off Haystack, he claims,
when Danny O'Brien contacted him early Friday afternoon. This was more
than a full hour before I spoke with Austin Heap on the telephone. I
issued no ultimatum through Danny. As I understand the conversation,
Danny expressed to Austin that I wanted to discuss some security
concerns and Austin claims that he turned down the Haystack network.

Thus, I did not issue the above-stated ultimatum to Austin. You were
misinformed. As a result of your flawed understanding, the rest of your
argument falls apart.

> 
> Dan Colascione, our technical director and the chief architect of
> Haystack, is meeting with Jacob in Seattle in order to discuss some of
> the technical issues.  Dan will also join this list serve in the near
> future and respond, in details that neither Austin nor I could present,
> to the technical concerns that have been raised.  Please hold your
> technical questions until then.
> 

I plan to speak with Daniel on Monday.

I believe that you and Austin have around twelve months of queued
questions to answer. Please note that while I think Daniel will have a
great deal to say about technical issues, I already have that information.

I do look forward to your replies in a timely manner. Specifically, I
believe that you and Austin have put yourselves in an awkward position,
but I look forward to understanding your perspectives on how this
happened. I assume good faith - mistakes happen and people who mean well
are not always the people who know best. I am sure that I have also made
mistakes and have come to some misunderstandings. Please do feel free to
correct me when you see such a misunderstanding.

> But the concerns raised by Jacob are not, at their heart, technical
> concerns—they are moral, ethical, and political concerns—and I think
> that I am well positioned to speak to them.  While I am not an expert on
> Iran, I am intimately familiar with the socio-political situation in the
> country, and I know the risks and consequences of dissent both through
> personal experience and the experiences of family members and close
> friends.  As to ethical issues, those on this list who have training in
> the law will know that ethics is a central part of this profession,
> whose members must regularly balance duties—sometimes conflicting—owed
> to their clients, the courts, and to the integrity of the legal system.
> 

I believe that the concerns I have raised are both technical and social.
Your lack of a technical understanding shows in your public press
statements. The same is true of Austin.

> I address the moral, ethical, and political issues raised by Jacob here:
> 
> THE THRESHOLD QUESTION: WHAT LEVEL OF RISK IS ACCEPTABLE, AND UNDER WHAT
> CIRCUMSTANCES?
> 

That is not the question that I raised. Do not put words into my mouth.

> Assume the following:
> 
> For the purpose of this discussion, I would like to assume certain facts
> arguendo:
> 
> 1.) Assume, for the purpose of this discussion, that a Haystack BETA
> tester undergoes a certain risk every time he or she tests the program
> in Iran.
> 

This isn't just an assumption - it's basically a stated goal of your
project.

While we spoke on the telephone, Austin explained to me that Haystack
was being used by high-value human rights activists in Iran. If he
honestly turned off Haystack, it's quite clear to me that they're not
using it now.

Were you aware of this fact?

Are you also aware that Austin does not actually know exactly who has
access to the Haystack software?

Are you also aware that I have a copy of Haystack?

This is the SHA1 checksum of the file in question:
fb1625745c641e8d4e26c19309358c3601037450
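
For anyone else holding a copy who wants to compare notes, here is a
minimal sketch of how to check a local file against that checksum. The
filename "haystack.exe" below is only a placeholder for whatever copy
you have, not the actual distribution name:

    import hashlib

    EXPECTED_SHA1 = "fb1625745c641e8d4e26c19309358c3601037450"

    def sha1_of(path, chunk_size=8192):
        # Stream the file so large binaries need not fit in memory.
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "haystack.exe" is a placeholder filename for the copy being checked.
    print(sha1_of("haystack.exe") == EXPECTED_SHA1)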

> 2.) Assume, for the purpose of this discussion, that we can quantify
> that risk as follows: using Haystack BETA is safer than using an
> ordinary proxy, but, say, less safe than using a program like Tor.  It
> is somewhere in that range.

I do not think that Haystack is in that range.

> 
> (Again, I do not know the extent to which these statements are true. I
> am no technical expert and that is not the point of this exercise.)
> 

I believe that by disclaiming this you have invalidated your argument.

Haystack is less safe than an ordinary proxy. On a scale of zero to ten,
Haystack is a negative number. The Department of State endorsements, the
technical architecture, the stated intent of the project, and the rest
of the marketing work by the Haystack/Censorship Research team speak to
that scoring.

> The threshold question and Jacob's and my differing answers to that question:
> 
> The only question that remains—at least, the moral, ethical, political
> question that remains—is:  assuming premises #1 and #2, above, under
> what circumstances is it proper to distribute the BETA version of
> Haystack to testers in Iran?
> 
> Jacob’s answer: “under no circumstances may Haystack be distributed if
> #1 and #2 are true.  The acceptable level of risk is determined by
> expert third parties, and not by the tester himself.”  This is my
> anecdotal understanding of Jacob’s position, both from the message Jacob
> previously sent, and from Austin’s description of his conversation with
> Jacob.  (If I misunderstood Jacob’s position, I apologize in advance—do
> not ascribe my reading of that position to Jacob without checking with
> him first.)
> 

This is simply over the top, Babak, both in how it is phrased and in
how factually incorrect it is.

> My answer: “The level of acceptable risk to a BETA tester of Haystack is
> determined by the tester’s informed consent.”  It is the tester, and not
> a third party, who decides the acceptable risk threshold.  If the risks,
> and the degree to which the risks are unknown, are properly disclosed in
> easy to understand, non-technical terms, then the tester’s act of
> consenting to the risk ends the moral inquiry.
> 

I strongly believe in informed consent. I believe your analysis is
incorrect. While I strongly agree that testers may consent if they
understand the risk, I want to point out two key issues that undermine
your argument:

Haystack has gone out of its way to willfully overstate the
capabilities of the Haystack network and the Haystack client.

Haystack has not maintained control of the Haystack software.

How have you confirmed that users are in fact informed before they use
the software?

> I think Jacob’s answer (if I understand it correctly) is incorrect as an
> intuitive moral matter, as an ethical principle, and as a statement made
> in light of the present political context.  I think my answer is correct
> under all of those criteria, though I think the CRC needs to adjust its
> testing methodology to conform to the model I am proposing.

You do not understand my concerns correctly.

> 
> I will explain my reasons below.
> 
> The applicable ethical rule and its exceptions:
> 
> The correct rule: I think that the correct ethical rule in this context
> is that the level of acceptable risk for any activity depends,
> primarily, on two factors: disclosure and consent.  If the risks are
> disclosed, and the subject can and does consent to those risks, then the
> risk is acceptable.

Neither you nor Austin actually understands the risks. It is absolutely
unreasonable to expect anyone to believe that you will be able to
convey those risks to testers.

> 
> This is not a novel proposition.  It has its roots in John Stuart Mill’s
> harm principle.  It is based in principles of moral sovereignty and
> human autonomy and dignity.
> 

Your argument rests on a flawed understanding of my concern and of my
stated stance. Claiming an intellectual relationship to John Stuart
Mill is unintentionally misleading at best.

> We see it expressed everywhere in the law.  In the context of medical
> care, it is called “informed consent.”  In the context of legal
> representation, it is called “consent to potential conflicts of
> interests.”  In the context of the torts law it is often called
> “assumption of risk”; and in the context of product liability it is
> instantiated in the law precluding liability whenever the manufacturer
> properly warns the user of the dangers of a product.  In each case, the
> basic, underlying ethical principle is the same—a risk or conflict is
> acceptable to the extent that it is understood by the risk-taker and
> therefore consented to by that risk taker.
> 
> Exceptions to the rule—activities that should be banned regardless of
> disclosure and consent: There are, of course, some activities which, in
> conventional morality and practice, are banned categorically regardless
> of disclosure and consent.  I can think of two classes of activities.
> 
> Where consent is seen to be impossible:  In the first category, we ban
> an activity because we do not believe one or more of the parties can
> properly consent to the activity.  The ban on the use of certain drugs
> is, at least partly grounded in the fact that the drugs are addictive,
> and thus thought to reduce the user’s ability to consent.  And, to use a
> quintessential example from the law school classroom—ostensibly
> consensual sex between a minor and an adult is banned in many
> jurisdictions on the theory that minors cannot, as a group, properly
> consent to sex with an adult.
> 
> When the activity carries insurmountable negative externalities: The
> other category of activities that are banned in society despite being
> engaged in with full disclosure and consent are activities that are seen
> to have negative externalities.  This is another part of the rationale
> for banning drugs.  Society sees fit to prohibit the use of certain
> drugs, even with disclosure and consent, because society believes that
> drug use has negative effects that go beyond the user’s effect on his
> own body.  The same type of rationale (rightly or wrongly) arguably
> forms the theoretical underpinning for banning prostitution.  The
> consenting actor gets to make his own risk assessment, but he does not
> get to make risk assessments for third parties/society harmed by his
> conduct.  Those third parties/society must be given a chance to make
> their own risk assessments.
> 
> Application to the present case:
> 
> The application of this ethical principle to the question in this case
> is fairly straightforward.  Whether it is proper to distribute Haystack
> BETA to a tester depends only on whether we have properly disclosed the
> risks of using the program and whether the tester has properly consented
> to those risks.

You do not understand the risks. I believe that puts you, Austin, or,
frankly, Daniel in no position to express those risks.

What are the risks?

> 
> I want to anticipate one objection to my conclusion: the objection is
> that the average tester in Iran can never properly ascertain the risk of
> a test, because they are not familiar with the technical concepts
> involved.  This objection is essentially modeled on the “impossible
> consent” exceptions to the general rule that I discussed above.
> 
> This objection fails.  In our society and in conventional understanding,
> the lack of technical expertise does not remove the subject’s ability to
> morally consent to an activity or procedure.
> 

Your project makes these specific claims on the front page of its
website:

"Haystack is a new program designed to provide unfiltered internet
access to the people of Iran. The software package is compatible with
Windows, Mac and Unix systems, and specifically targets the Iranian
government's web filtering mechanisms."

"Haystack is not an ordinary proxy system. It employs a sophisticated
mathematical formula to hide users' real Internet traffic inside a
continuous stream of innocuous-looking requests. In addition to
providing anonymity, Haystack uses strong cryptography, ensuring that
even if users' traffic is detected, it cannot be read. Trying to find
and decipher our users' traffic amidst all the other traffic on the web
really is like trying to find a needle in the proverbial Haystack."

"Once installed Haystack will provide completely uncensored access to
the internet from Iran while simultaneously protecting the user's
identity. No more Facebook blocks, no more government warning pages when
you try to load Twitter or access news sites -- just unfiltered Internet."

There is no sophisticated mathematical formula behind how you hide
traffic. The cryptography involved perhaps partially satisfies the
privacy component required for circumvention. However, there is nothing
sophisticated or mathematical about how Haystack purports to solve the
traffic-analysis portion of the circumvention problem. This is merely
hand waving and smoke blowing.
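
To illustrate the distinction with a toy sketch (this is only an
illustration, not a description of Haystack's actual implementation):
strong encryption protects the content of a request, but by itself it
leaves the observable size and timing of each message unchanged, which
is exactly what traffic analysis feeds on.

    import os

    def xor_stream_encrypt(plaintext: bytes, key: bytes) -> bytes:
        # Toy stream cipher used only to show that ciphertext length
        # tracks plaintext length; real ciphers such as AES-CTR behave
        # the same way in this respect.
        keystream = (key * (len(plaintext) // len(key) + 1))[:len(plaintext)]
        return bytes(p ^ k for p, k in zip(plaintext, keystream))

    key = os.urandom(32)
    covert_request = b"GET /blocked-article HTTP/1.1\r\nHost: example.net\r\n"
    ciphertext = xor_stream_encrypt(covert_request, key)

    # An observer on the wire never sees the plaintext, yet the size
    # (and timing) of the request is untouched by the encryption.
    print(len(covert_request), len(ciphertext))  # identical lengths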

This is, quite simply, at best a half-truth; frankly, it seems much
worse considering the criticism that people have expressed over the
last twelve months.

Haystack is, for all intents and purposes, worse than an ordinary
proxy system. I believe this because of the above claims and because of
the actual technical details of the Haystack implementation.

> The quintessential example is medical care, and the doctrine of informed
> consent in the medical field.  A patient can consent to a medical
> procedure even if the patient does not understand the technical details
> of the procedure.  A patient can—and patients do—properly consent even
> to a medical procedure whose risks are unknown.  The patient simply has
> to be informed of the extent to which the risks are unknown.

You have failed to control access to Haystack. Your testers are not
informed that they are playing with fire, nor are they informed about
risks that you, Austin, and Daniel have not considered.

> 
> This general principle applies in the same way in the context of
> anti-censorship software.  The risks must be disclosed to the extent
> possible and in a language ascertainable to a non-expert; and the extent
> to which the risks are unknown must also be disclosed.  After a proper
> disclosure, it is consent, and consent alone (i.e., not Jacob, not
> Evgeny, and not anyone else) that determines whether distribution is
> proper.
> 

The risks have not been disclosed, and the entire world has been
misled by your media interviews, Austin's media interviews, your
website, and so forth.

You are correct that consent is absolutely important. However, I think
that you are punting on a very important issue: while a user may have
the final say in what they do, Austin, you, and others have a
responsibility to the possible users at hand. From what I have seen, up
until Austin turned off Haystack, Haystack was not living up to this
responsibility.

> [The second possible objection is that distribution of anticensorship
> software would have insurmountable negative externalities and must
> therefore be prohibited.  I don’t think anyone has made this objection,
> so I will not address it here.]
> 

I certainly am not making that argument.

> The conclusion: if Haystack imposes a risk on the user which is
> somewhere between an ordinary proxy and Tor (again, I have no idea if
> this is true—these are merely assumptions I made at the beginning of the
> argument) then testers who understand this risk (and the description of
> the level of risk should be much, much more specific than what I gave
> earlier) should be permitted to use it without interference from third
> parties.
> 

To reiterate - it does not - it is worse. Your argument is invalid for
this reason and many more.

> Political considerations:
> 
> Several political considerations further support my model over the model
> I am attributing to Jacob.
> 
> First, Jacob’s argument incorrectly treats Iranian testers as passive
> subjects.  It is, in other words, premised on a patronizing moral
> principle: we should decide what level of risk is acceptable to others. 
> As an Iranian, and a person familiar with the situation and the people
> affected by the Iranian government’s brutal repression, I cannot agree
> with this statement.  And as an American, who like most Americans has at
> least a sliver (probably more) of libertarian blood flowing in my veins,
> I have a gut reaction against this type of proposition.
> 

You're making a straw man argument.

I do not treat Iranian testers as passive subjects. I treat possible
testers, Iranian or otherwise, as human beings who are clearly
misinformed, if they are informed on the subject at all.

> I will only say that Iranian testers know the risks of defying the
> government.  They live with that risk every day.  They see their
> brothers and sisters beaten in the street, and harassed by police
> officers; they see their government pass laws that make it a crime to do
> things others take for granted; they hear stories of officers breaking
> into dorm rooms and confiscating and destroying computers; and they know
> the risk of using a BETA test of Haystack, a final version of Tor or of
> Freegate.
> 

How have you evaluated this statement?

What is the risk of using a "BETA test of Haystack" in your evaluation?

Do you mean a legal risk or another kind of risk?

> Second, the type of patronizing argument envisioned here—which states
> that a third party must determine the acceptable level of risk Iranian
> testers are willing to take—is generally inconsistent with the
> philosophy underlying the anti-censorship movement.  Anti-censorship is
> premised on the idea that we do not make decisions for others.  It is
> premised on the idea of providing information and providing people with
> the opportunity for self-determination.  A categorical rule banning
> certain risks is not only inconsistent with basic moral and ethical
> principles, it is inconsistent with the basic philosophical framework
> underlying the anti-censorship movement.

You are correct - this is an extremely patronizing argument. It has also
convinced me to reply and to reveal information that I previously had
not revealed.

I do not support a categorical rule that bans anything relating to the
anti-censorship software development community.

Still - you, Austin, and others involved have a responsibility to your
users. I'd also make the argument that you have a responsibility to the
anti-censorship movement.

> 
> What this means for us—proposed compromise:
> 
> I have done what I think is a thorough analysis of the ethical issues
> involved.  There is room for an open debate, and those who know me know
> that I am always willing to change my mind when I see that I am wrong. 
> At this point, however, I think it is fairly clear that the proper model
> for testing Haystack is the model of “informed consent”.
> 

I agree that you may view this as your model, but I do not agree that
your model is consistent with reality. Please do check in with Daniel
on this topic - he is well aware of the issues with Haystack and, as I
understand it, this is why he has recently distanced himself from
Haystack.

> While the CRC generally has this model in mind in its testing
> procedures, I think that we need to have a thorough discussion about
> specific steps that we can take within the organization to fully
> adhere to this model.

I have no doubts about that!

> 
> There are four changes that I propose we make in order to fit this
> informed consent model.
> 
> 1.) Disclaimer.  First, I propose that we include a disclaimer of the
> risks of Haystack—including the unknown risks of Haystack—with every
> BETA copy of Haystack that is distributed.  The disclaimer would be
> presented in the user’s language, and would give the user another
> opportunity to make the risk calculus before using the program.  Second,
> I propose that we be more clear in our materials—absolutely,
> unequivocally clear—that Haystack is in the BETA stage of testing.
> 

What does that mean? Your technical claims are wildly preposterous and
no amount of testing disclaimers changes this fact. Information hiding
is a very real field of study with a significant body of prior work; as
far as I've been able to ascertain, Haystack is not up to snuff with the
rest of the field.

> 2.) Distribution.  We do inform all of our testers that the program is
> risky.  But we have less control over others who obtain the program from
> our testers.  However, Haystack’s functionality allows us to revoke
> access to servers from individual users.  As our second measure, I
> propose that we take the necessary steps, both through directives and
> any technical means available, to ensure that each Haystack BETA tester
> does not distribute the program to others.

Your entire suggestion here is dubious at best. You can identify and
lock out individual users? You are able to count your total users? You
still refuse to consider Kerckhoffs's principle and you consider that to
be an improvement?

It's interesting that you have now openly contradicted Haystack's claims
about anonymity.

Anonymity is a quantifiable quality of a system. For example,
"resistance to end-to-end correlation (traffic confirmation) attacks" -
or perhaps, in the case of Haystack, "we can identify you individually
and lock you out of the system".

I had not heard the latter used as a definition for the term anonymity.
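
To make the first notion concrete, here is a purely illustrative sketch
of an end-to-end correlation (traffic confirmation) attack. The traffic
numbers are invented for the example and have nothing to do with
Haystack's actual behaviour; the point is that an observer who sees
flow volumes at both ends can match them up even when every byte is
encrypted.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented per-second byte counts for three users entering a relay
    # network. All three have the same average rate, so mean volume
    # alone cannot tell them apart; only the pattern over time can.
    entry_flows = {name: rng.poisson(100, size=60)
                   for name in ("user_a", "user_b", "user_c")}

    # One flow observed leaving the network: user_b's traffic plus some
    # noise from padding/other traffic (network delay would call for a
    # sliding comparison, omitted here for brevity).
    exit_flow = entry_flows["user_b"] + rng.poisson(5, size=60)

    # Correlating volumes over time picks out the matching entry flow;
    # the strength of the cryptography is irrelevant to this attack.
    scores = {name: np.corrcoef(flow, exit_flow)[0, 1]
              for name, flow in entry_flows.items()}
    print(max(scores, key=scores.get))  # "user_b"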

> 
> 3.) Volume.  The more copies of Haystack BETA are out there, the more
> likely it is that certain users will use the program without informed
> consent.  I propose that we restrict our BETA testing pool to a small
> number—less than 20 users.  Again, my understanding is that there are
> technical means to enforce the limits on the testers.  Before any
> increase in our testing pool, we will take measures to reduce the risks
> posed by the program to users.
> 

You have already failed to do this. If the government of Iran has been
recording traffic for the purposes of traffic analysis, those people are
probably in danger because of your actions.

Testing on humans is absolutely unethical without actually defining
your threat model or fairly evaluating the risks.

Using humans is also absolutely not needed at this stage - Haystack is
still an infant. Frankly, I think that without a great deal of work to
repair the damage done, it is unreasonable to turn Haystack on again.

> 4) Risk assessment and other measures.  Jacob proposed a number of other
> measures that are technical in nature and may reduce the risks allegedly
> posed by Haystack BETA, or make it easier to disclose some of those
> risks.  To the extent that this is true, and consistent with the
> principles I discussed above, we should take technical measures
> consistent with some of Jacob’s and others’ suggestions.
> 

It is good that you've decided to consider this - it may simply be too
late for something called "Haystack" because of the media attention that
you have generated.

> I think these are reasonable compromises, they will make Haystack BETA
> safer, and I hope that they assuage Jacob’s concerns.
> 

I do not think you are being reasonable, as your view of my concerns
is largely incorrect.

> And now to Jacob’s ultimatum—that he will release confidential
> information about the BETA version of a program that is being
> distributed to a few individuals.  I’ve discussed and (I think)
> addressed the moral, ethical, and political arguments underlying Jacob’s
> proposal above; and I think that it is clear from my discussion that I
> take Jacob’s concerns seriously and that I respect his views and the
> views of the tech community.

Again, Austin turned off Haystack before I ever spoke with him. That
places the burden on you, Austin, and the rest of Haystack.

If you turn Haystack on again, I think you're asking for serious trouble
without even having a modicum of understanding regarding the technical
issues or the risks.

> As I mentioned, I think Jacob’s conduct is well-intentioned, but based
> in misguided and patronizing moral and ethical principles.  As an
> Iranian, I have a great deal of experience with those who sincerely
> believe that they can make moral decisions for a community.  I do not
> particularly appreciate this posture.  So I say this to Jacob, and I say
> it as earnestly and gently as I can, with the aim of encouraging further
> debate: a well-intentioned, misguided bully is still a bully. (I do not
> mean to insinuate that there is any moral equivalence between Jacob’s
> ultimatum and the atrocities committed in Iran: there is not; but it is
> an analogy that makes a point).
> 

I find your arguments misguided and patronizing. My ethical framework is
solid and I believe you have misjudged me. This has clearly degenerated
into an ad hominem attack.

To call me a bully is quite uncalled for and I do not appreciate it. By
contacting Haystack with my concerns, I have helped Haystack and its users.

Consider the fact that for over twelve months I have explicitly told
Austin that I wanted to help him help others. Also consider that I
spoke to him yesterday with the earnest goal of helping, protecting,
and enabling users in Iran.

To insinuate that I have any moral equivalence with the Iranian
authorities speaks to a lack of understanding about my character, my
life's work, my rejection of authoritarian power structures, and my
dedication to freedom of information.

I am entirely dedicated to helping others in a safe way and I reject
using threats as a method of accomplishing that goal.

> Whether we will abide by Jacob’s ultimatum or not will depend on the
> CRC’s continued discussions regarding the technical issues that Jacob
> has raised and will raise with Dan and Austin in the upcoming days.  I
> also believe, from my conversations with Dan and Austin, that we will
> henceforth be more open to discussing certain technical aspects of the
> program with the tech community.  I think the measures I have proposed
> will, in the long run, contribute to the safety of Iranian testers of
> Haystack, and perhaps of other anti-censorship software as well.

I did not issue the previously claimed ultimatum. It does not seem
reasonable to re-enable Haystack at this time; this is not because of
whether I will release information or not, it is simply that the
protocol is unsafe.

The burden of proof is on Haystack to demonstrate otherwise.
Technically. With threat models, design documents, and third party
reviews of any software implementation.

> 
> These are all positive developments, and I thank each of you for your
> continued contribution to the debate.

I do not take kindly to the way that you have portrayed me.

I strongly encourage you not to turn the Haystack network on again - you
have a responsibility to humanity to not make things worse than you
already have.

All the best,
Jake


