[liberationtech] Fwd: Haystack
Evgeny Morozov
evgeny.morozov at gmail.com
Thu Aug 19 23:47:18 PDT 2010
I would like to add another thread to this fascinating discussion: as some
of you may know, Haystack has also been granted a US government license
<http://blog.austinheap.com/anti-censorship-software-licensed-by-us-government-for-export-to-iran/>
to legally distribute its software in Iran (that is, they are not subject
to the usual set of sanctions-related restrictions on the export of
technology to the country). I also believe that Hillary Clinton mentioned
Haystack - at least in passing - in one of her speeches.
Whatever the merits of Haystack's technology - and I must confess that I'm
with the most skeptical members of this thread - such endorsement by the
US government may have also given Iranians a false sense of security and at
least some nominal assurance that Haystack has been properly vetted on its
technological merits. (Since it was the US Treasury that granted the license,
one cannot be 100% sure that such vetting actually took place.)
This is not necessarily to bash Haystack, but to point out the
inefficiencies of the current sanctions regime on Iran and the kinds of
unintended consequences it creates.
On Fri, Aug 20, 2010 at 12:59 AM, Gabe Gossett <Gabe.Gossett at wwu.edu> wrote:
> I think that on the most important point we appear to agree: developers
> should be aware that overinflated claims of security get people in lots of
> trouble, including killed and tortured.
>
>
>
> I’m not sure if we agree that it is good for people to at least do
> something, though. I get the impression that you would prefer the
> development of these technologies be abandoned entirely since they have the
> potential to lend people a false sense of security. I don’t want to put
> words in your mouth though, so if that is a mischaracterization, you have my
> apologies. Clearly, you have technical expertise and years of experience in
> this area that I do not, and so I would allow that my perspective lacks some
> of the ins and outs that might make me more wary.  But I remain convinced that
> it is better to do something than nothing. I also think that developers
> making inflated claims and the media misrepresenting an issue are entirely
> separate problems.
>
>
>
> I also disagree that modern technologies entirely change the paradigm on
> issues like this.  Human memory of casual conversations decades past was
> enough to get plenty of Chinese “reeducated” during the Cultural
> Revolution. In fact, I would suggest that journalists and intelligence
> officers have been grappling with this same ethical issue for decades; it
> just wasn’t computerized.  Anyhow, I think you and I already had an exchange
> about that on this list and we don’t need to restart that discussion if you
> would prefer not to.
>
>
>
> My ultimate point is that what we are talking about here are tools, and any
> tool has its pluses and minuses.  Just because there are people engaging
> in inaccurate hype about what a tool can do does not mean that the rest of
> us should not make them or promote them within responsible limits.
>
>
>
> Also, you have my kudos for a well-done rebuttal with limited time.
>
>
>
> -Gabe
>
>
>
>
>
> *From:* Jim Youll [mailto:jyoull at alum.mit.edu]
> *Sent:* Thursday, August 19, 2010 2:48 PM
> *To:* Gabe Gossett
>
> *Cc:* Liberation Technologies
> *Subject:* Re: [liberationtech] Fwd: Haystack
>
>
>
> I regret that I can't take the time now to write a proper rebuttal, but due
> to other constraints I have to answer this now or never, and never feels
> wrong.
>
>
>
> I assume that ALL "users" of security technology are unqualified to
> evaluate the merits or non-merits, and particularly, the long-term risks of
> using such technology. There are stacks of CHI and other papers discussing
> this. Some of them take an orthogonal approach and you have to read it into
> them. This has nothing to do with looking down at people in other places. I
> "trust" that my word processor is not sending my poetry to the FBI. That's
> about all I can do. At some point - and it comes pretty early when we're
> talking about software on a computer network - we can't do any more checking
> and have to "trust". The declaration that users are responsible for their
> own well-being forgets that they can't check out big claims because even the
> creators of the software can't really check out some of their own claims.
>
>
>
> I just reviewed a conference paper that purported - yet again - to "fix"
> web site security through an enhanced browser display. That's a TRIVIAL
> problem compared to keeping noisy people safe from their angry governments,
> and we _do not_ even know how to do that correctly, despite 15+ years of
> trying.
>
>
>
> Developers are not good at predicting the long-term implications of using
> their software. I've seen no disclosures yet that rise to the level of sober
> severity and disclaimers in which every one of these technologies should be
> wrapped.
>
>
>
> I know why that is - because people at risk might not use them. As a
> developer, you don't want to discourage people from using your stuff.
>
>
>
> Talking to the wrong person on the street carries a tiny fraction of the
> risk of talking to the wrong person online... in terms of detection after
> the fact (and forever), playback, "evidence against you," and the at-risk
> person's ability to perceive the real or possible risks of the action or to
> have any sense of who's watching either now or who's reviewing the records
> later. I might vary my walking routes and meeting habits to avoid pattern
> analysis, but online much of what might feel like "avoidance" is actually
> disclosure - encrypting e-mails? Uh oh...
>
>
>
> If people don't use this software because it's too risky to use, then
> maybe that is the correct outcome. But I've never seen enough info from publishers
> about why people should "not" use a piece of software, only lots of talk
> about why they should.
>
>
>
> I've seen others try to fault these trusting, non-technical (or even
> somewhat technical) users for their "decision" to use the software, with the
> same claim that they are "accepting" some risks. I say that is absolutely
> unfair and a cheap dodge by developers who like publicity for "human rights
> software" but who really have not taken ownership of the consequences for
> others who may use the software as it was intended to be used.
>
>
>
> As civilian users of software or any other complex machine, is it the case
> that we would look at a piece of software (or any other machinery) that
> seems to do something we want, and say "hmm... this software is endorsed by
> several organizations, and it's written up in the Wall Street Journal....
> maybe it's completely unsuited for the task. I'll avoid it."
>
>
>
> No. We don't do that.
>
>
>
> The further risk is that people who believe they are "more safe" because of
> this software may take greater risks than they would if they were not using
> any software at all. The risk there is very much wrapped up in our
> inability to estimate or compare abstract levels of risk or reward (that's
> why gambling works). The tragic downside is that when there is a failure,
> the user is in a much worse spot than they would have been when they were
> still afraid of being caught, because they've possibly been induced to send more
> messages, or reveal more damning information, than they might have
> otherwise - due to the illusion of safety.
>
>
>
> - jim
>
>
>
>
>
> On Aug 19, 2010, at 12:42 PM, Gabe Gossett wrote:
>
>
>
> “How many of the people known to have been arrested or silenced were
> using, or thought they were using, some kind of 'safe' technology to subvert
> both technological blockades and national laws? Until we know that, should
> we be prescribing these cures to patients we've never met and can't watch
> over?”
>
>
>
> At the risk of going into a similar debate that took place on this listserv
> within the last year . . .
>
>
>
> Is there any way to know how many people have been arrested or silenced
> when using a “safe” technology? Not really. No doubt it has happened
> many times. But I don’t see why that would mean these technologies
> shouldn’t be developed and distributed by Westerners in safe societies with
> access to the means to do so. There is a long history of cat and mouse
> government information blockade circumvention that predates computers.  In
> every instance, that circumvention circuit involved unknown
> degrees of risk.
>
>
>
> As long as the developers are honest about the capabilities of their
> applications, and the users have as good an understanding of the risks as is
> possible, I don’t see a problem. I’m speaking on a theoretical level here,
> not about the implementation of any one technology.  Haystack may have
> made inflated claims about its capabilities and lacked clarity about what it is
> offering (if anything at all), and that is wrong.  But, Haystack aside, if
> we waited until we knew for certain whether a technology was entirely safe
> from government prying eyes or not we would just do nothing. If any
> circumvention technology developer is going around claiming that they have
> developed an entirely safe technology, that is wrong. I have a problem,
> though, with implicitly assuming that users in repressive countries are too
> naïve to weigh the risks of trying to get around government barriers. I see
> that implication in the statement above, though perhaps that was not
> intentional.
>
>
>
> I think that there is generally a good point in that statement, but it
> only goes so far. Any user of these technologies is probably already
> putting themselves at risk with their government. Just having a face to
> face conversation with the wrong person, after all, will get you in trouble.
> So if a safe Westerner thinks they can develop something that might give
> people in these countries an edge against a government, then by all means
> let them do it and feel good about it.
>
>
>
> -Gabe
>
>
>
>
>
>
>
> *From:* liberationtech-bounces at lists.stanford.edu [mailto:
> liberationtech-bounces at lists.stanford.edu] *On Behalf Of *Jim Youll
> *Sent:* Thursday, August 19, 2010 10:32 AM
> *To:* Mahmood Enayat
> *Cc:* Liberation Technologies
> *Subject:* Re: [liberationtech] Fwd: Haystack
>
>
>
>
>
> On Aug 19, 2010, at 6:42 AM, Mahmood Enayat wrote:
>
>
>
>
> The big players of circumvention solutions, which have received less
> attention, are all available here: www.sesawe.net. Why is Haystack not
> available online like them?
>
>
>
>
>
> Cat and mouse can be played, yes.
>
> But this technology is looking more and more like merely a way for
> privileged, warm, well-fed, free, safe Westerners to feel good about
> themselves while putting already at-risk populations at even greater risk of
> trouble.
>
>
>
> Laws, guns, and prisons trump technological finesse. Period. This is not
> negotiable.
>
>
>
> Keep in mind that US companies providing equipment to Internet providers
> are also providing access and monitoring capabilities in that equipment...
> at full OC3 speeds...
>
>
>
> How many of the people known to have been arrested or silenced were using,
> or thought they were using, some kind of 'safe' technology to subvert both
> technological blockades and national laws? Until we know that, should we be
> prescribing these cures to patients we've never met and can't watch over?
>
>
>
>
>
> 2002:
>
> "...But Chinese surfers often use proxy servers - websites abroad that let
> surfers reach blocked sites - to evade the Great Red Firewall. Such
> techniques are routinely posted online or exchanged in chat rooms. But
> China's 45 million internet users face considerable penalties if they are
> found looking at banned sites. According to human rights activists, dozens
> of people have been arrested for their online activities on subversion
> charges."
>
> - http://news.bbc.co.uk/2/hi/technology/2234154.stm
>
>
>
> 2006:
>
> ... Those attempting to access these banned sites are automatically
> reported to the Public Security Bureau. Internet police in cities such as
> Xi'an and Chongqing can reportedly trace the activities of the users without
> their knowledge and monitor their online activities by various technical
> means."
>
> - http://www.amnestyusa.org/document.php?id=ENGUSA20060201001
>
>
>
> 2008:
>
> "...Around 30 journalists were known to be in prison and at least 50
> individuals were in prison for posting their views on the internet. People
> were often punished simply for accessing banned websites"
>
> - http://www.amnesty.org/en/region/china/report-2008
>
>
>
> 2010:
>
> "... The ministry of public security said 5,394 people had been arrested
> and that over 9,000 websites had been deleted for having pornographic
> content. The ministry did not say how many people had subsequently been put
> on trial. The authorities released the figures with a warning that its
> policing of the internet would intensify in 2010 in order to preserve 'state
> security'. China maintains strict censorship of the internet in order to
> make sure that unhealthy content, including criticism of the Communist
> Party, does not reach a wide audience."
>
> -
> http://www.telegraph.co.uk/news/worldnews/asia/china/6921568/China-arrests-5000-for-internet-pornography-offences.html
>
>
>
> _______________________________________________
> liberationtech mailing list
> liberationtech at lists.stanford.edu
>
> Should you need to change your subscription options, please go to:
>
> https://mailman.stanford.edu/mailman/listinfo/liberationtech
>
>
>
>