[liberationtech] The worrisome trend toward liability in networking technology

Daniel Colascione dan.colascione at gmail.com
Mon Nov 7 13:18:54 PST 2011


I've read with interest the recent discussion on this list of the
ramifications of oppressive regimes' use of western filtering
technology. While the problem warrants serious discussion, I feel
that the proposed remedies are veering in a dangerous direction, and
I'd like to stake out a somewhat contrary position on what an
appropriate response would look like:

We're all outraged that oppressive regimes use western filtering and
network monitoring technology. But we should pause before acting on
this outrage: the situation is more subtle than it might immediately
appear. Oppressive regimes don't censor for profit: filtering and
monitoring are existential and ideological imperatives for them.
While these regimes (like all rational actors) seek to minimize
costs, they would continue to censor even at greatly increased
prices. Of course, there is some price at which these regimes would
discontinue censorship; but, as the failure of conventional economic
sanctions to deter other objectionable behavior has shown, oppressive
regimes are nowhere near being inconvenienced enough to change
course.

If the efforts discussed here were successful and oppressive regimes
were somehow cut off from western filtering technology, we wouldn't
see an end to Internet censorship. Instead, we would see regimes
invest in domestically developed solutions, which are less expensive
than one might think: professional-quality software can be written
with little more than skilled labor, and the hardware can be
manufactured with modest capital investment. Iran has already made
headway in this area. All the sanctions, codes of conduct, lawsuits,
and so on we've been discussing here amount to an increase in the
cost of censorship, and oppressive regimes simply aren't cost-bound
at this point. Now, all other things being equal, the world becomes a
better place as censorship becomes more expensive, and if all other
things were equal, I would support the ideas being raised on this
list.

But all other things are not equal: the measures under discussion
carry serious externalities, and they could affect the entire
technology industry.

1) How do we decide what technology is subject to regulation? Network
technology is quintessentially dual-use: anyone who has administered a
network knows that the same features that allow us to block outbound
attacks, accelerate the web via transparent caching, scan our servers
for vulnerabilities, detect cross-site scripting attacks, and scan our
email for spam can also be used for malicious ends. Cluster bombs and
Saturday night specials have no legitimate purpose, but proxies,
firewalls, and logging tools have many.

The only reasonable way to draw a line between filtering technology
and general network technology is intent, which is a notoriously
vague standard. If marketing censorship technology came with serious
liability, vendors would simply build products with precisely the
same features and market them with "wink, wink, nudge, nudge"
references to benign network management. The sketch below makes the
dual-use problem concrete.

2) Where does responsibility end? Brett Solomon suggests that we
attach liability to technology, but where that liability stops isn't
entirely clear.

Consider Blue Coat: the exact mechanism by which Syria acquired Blue
Coat's technology remains unclear. If Blue Coat knowingly sold this
technology to Syria, it ought to be prosecuted under applicable
laws. But if it turns out that Syria indirectly obtained Blue Coat's
technology through the gray market or outright piracy, I feel it'd be
difficult to attach legal or moral blame to the company.

Let's suppose we do hold Blue Coat liable for Syria's use of its
products because it provided automated software updates or other
routine, free support. To avoid future liability, companies would be
forced to consult blacklists and prevent certain IP address ranges
from downloading updates, manuals, and other materials.
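
A minimal Python sketch of the gatekeeping this would force onto
every vendor's update servers (the embargoed ranges below are
placeholder documentation prefixes, not real national allocations):

    import ipaddress

    # Placeholder ranges standing in for embargoed national allocations.
    EMBARGOED_NETWORKS = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def may_download(client_ip):
        """Refuse updates, manuals, etc. to embargoed address ranges."""
        addr = ipaddress.ip_address(client_ip)
        return not any(addr in net for net in EMBARGOED_NETWORKS)

Note that the check sees nothing but the request's apparent source
address.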

These restrictions would be wholly ineffective: any marginally
competent network administrator could obtain support via third
parties and proxy servers. Yet every networking firm would need to
bear the expense of implementing them. Would we then propose that
companies use a more robust process to verify the nationalities of
those requesting support?

After the failure of these measures, what would be next? Would we
compel networking vendors to include "kill switches" that could be
used to disable network equipment remotely if it's found in an
unapproved context? Should devices refuse to function without
continuous approval from the original manufacturer?
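
A deliberately simplified Python sketch of that "continuous approval"
model (the approval endpoint and the shutdown hook are hypothetical):

    import time
    import urllib.request

    # Hypothetical manufacturer endpoint the device must keep asking.
    APPROVAL_URL = "https://vendor.example/approve?device=SN12345"

    def shutdown_device():
        """Stand-in for permanently disabling the appliance."""
        raise SystemExit("device disabled: manufacturer approval withheld")

    def run_with_kill_switch(poll_seconds=3600):
        """Run only for as long as the manufacturer keeps saying yes."""
        while True:
            try:
                with urllib.request.urlopen(APPROVAL_URL, timeout=10) as resp:
                    approved = (resp.status == 200)
            except OSError:
                approved = False  # no answer: fail closed, as regulators would demand
            if not approved:
                shutdown_device()
            time.sleep(poll_seconds)

Whoever controls, or compromises, that approval endpoint controls
every deployed device.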

Never mind the financial cost of these measures: instead, imagine the
potential for misuse. What happens when bad actors inevitably
commandeer these safeguards to cause mischief? What happens when
governments force companies to cut off support for (and perhaps
disable outright) network equipment in a rival nation?

3) What about open source software? Do we subject it to the same
regulations, effectively destroying the tradition of openness that
fosters its development? Or do we exempt open source software from
regulation, allowing oppressive regimes to obtain filtering
technology almost as easily as before while penalizing companies that
want to keep their new networking technology proprietary? Should
downloading Squid or Nessus warrant a background check? Such a scheme
would amount to a retroactive, self-inflicted defeat in the crypto
wars of the 1990s.

When we ask for regulation of filtering technology, we're asking for
the very dystopia we're all trying to avoid. Brett Solomon's proposal
for "human rights by design" is in fact "oppression by design":
because software cannot judge the moral orientation of the bytes it
processes, the only mechanism with which we can enforce human rights
is authoritative third-party control, and that control can do far
more evil than it can ever do good.

Look: it makes sense to adopt simple, clear measures that disrupt
censorship and surveillance. We already effectively prohibit direct
transfer of technology to oppressive regimes, as anyone who has dealt
with OFAC can attest. But the increasingly strenuous measures
proposed here, e.g. kill switches and licensing, come with
diminishing returns and serious costs to our budgets and liberties,
and any "victory" would be Pyrrhic.

Most participants in this discussion have analogized filtering
software to weapons technology, but the better analogy is drug
prohibition: both drugs and filtering technology can be produced
without massive capital investment; both can be produced
clandestinely (albeit at higher cost than commercial production);
both are in high demand; both move easily on black and gray markets;
both often have perfectly legitimate uses that are impossible to
distinguish from illicit ones; and both may well have serious adverse
effects on society. Most importantly, in both cases, attempts to
crack down on the trade simply drive it underground while causing
collateral damage to society as a whole.

I may attract some flames for the sentiment, but I believe that
outrage over the use of western filtering technology is a red
herring. Our resources are better spent on 1) circumventing the
filtering that will inevitably be put in place, and 2) activism that
reduces the demand for censorship and surveillance in the first
place.

