[liberationtech] The Internet Ideology: Why We Are Allowed to Hate Silicon Valley

Yosem Companys companys at stanford.edu
Sat Nov 16 16:21:42 PST 2013


http://www.faz.net/aktuell/feuilleton/debatten/the-internet-ideology-why-we-are-allowed-to-hate-silicon-valley-12658406.html

11.11.2013

The Internet Ideology

Why We Are Allowed to Hate Silicon Valley

It knows how to talk about tools but is barely capable of talking
about social, political, and economic systems that these tools enable
and disable, amplify and pacify. Why the “digital debate” leads us
astray.

EVGENY MOROZOV

If Ronald Reagan was the first Teflon President, then Silicon Valley
is the first Teflon Industry:  no matter how much dirt one throws at
it, nothing seems to stick. While “Big Pharma,” “Big Food” and “Big
Oil” are derogatory terms used to describe the greediness that reigns
supreme in those industries, this is not the case with “Big Data.”
This innocent term is never used to refer to the shared agendas of
technology companies.  What shared agendas? Aren’t these guys simply
improving the world, one line of code at a time?

Something odd is going on here. While we understand that the interests
of pharmaceutical, food and oil companies naturally diverge from our
own, we rarely approach Silicon Valley with the requisite suspicion.
Instead, we continue to treat data as if it were a special, magical
commodity that could single-handedly defend itself against any evil
genius who dares to exploit it.

Earlier this year, a tiny scratch appeared on the rhetorical Teflon of
Silicon Valley. The Snowden affair helped – but so did other events.
The world seems to have finally realized that “disruption” – the
favorite word of the digital elites – describes a rather ugly, painful
phenomenon. Thus, university professors are finally complaining about
the “disruption” brought on by the massive open online courses
(MOOCs); taxi drivers are finally fighting services like Uber;
residents of San Francisco are finally bemoaning the “disruption” of
monthly rents in a city that has suddenly been invaded by
millionaires. And then, of course, there are the crazy, despicable
ideas coming from Silicon Valley itself: the latest proposal, floated
by one tech executive at a recent conference, is that Silicon Valley
should secede from the country and “build an opt-in society,
ultimately outside the United States, run by technology.” Let’s share
his pain: A country that needs a congressional hearing to fix a
website is a disgrace to Silicon Valley.

This bubbling discontent is reassuring. It might even help bury some
of the myths spun by Silicon Valley. Wouldn’t it be nice if one day,
told that Google’s mission is to “organize the world’s information and
make it universally accessible and useful,” we would finally read
between the lines and discover its true meaning: “to monetize all of
the world’s information and make it universally inaccessible and
profitable”? With this act of subversive interpretation, we might
eventually hit upon the greatest emancipatory insight of all: Letting
Google organize all of the world’s information makes as much sense as
letting Halliburton organize all of the world’s oil.

But any jubilation is premature: Silicon Valley still holds a firm
grip on the mechanics of the public debate. As long as our critique
remains tied to the plane of technology and information – a plane that
is often described by that dreadful, meaningless, overused word
“digital” – Silicon Valley will continue to be seen as an exceptional
and unique industry. When food activists go after Big Food and accuse
those companies of adding too much salt and fat to their snacks to
make us crave even more of them, no one dares accuse these activists
of being anti-science. Yet, a critique of Facebook or Twitter along
similar lines – for example, that they have designed their services to
play up our anxieties and force us to perpetually click the “refresh”
button to get the latest update –  almost immediately brings
accusations of technophobia and Luddism.

The reason why the digital debate feels so empty and toothless is
simple: framed as a debate over “the digital” rather than “the
political” and “the economic,” it’s conducted on terms that are
already beneficial to technology companies. Unbeknownst to most of us,
the seemingly exceptional nature of the commodities in question – from
“information” to “networks” to “the Internet” – is coded into our
language. It’s this hidden exceptionalism that allows Silicon Valley
to dismiss its critics as Luddites who, by opposing “technology,”
“information” or “the Internet” – they don’t do plurals in Silicon
Valley, for the nuance risks overwhelming their brains – must also be
opposed to “progress.”

How do you spot “the digital debate”? Look for arguments that appeal
to the essences of things – of technology, information, knowledge and,
of course, the Internet itself. Thus, whenever you hear someone say
“this law is bad because it will break the Internet” or “this new
gadget is good because that’s what technology wants,” you know that
you have left the realm of the political – where arguments are usually
framed around the common good – and have entered the realm of bad
metaphysics. In that realm, what you are being asked to defend is the
well-being of phantom digital gods that function as convenient
stand-ins for corporate interests. Why does anything that might “break
the Internet” also risk breaking Google? This can’t be a coincidence,
can it?

Perhaps we should ditch the technology/progress dialectic altogether.
“Is it O.K. to be a Luddite?” ran the title of a fabulous 1984 essay
by Thomas Pynchon – a question that he answered, by and large, in the
affirmative.  This question feels outdated today. “Is it okay not to
be a Luddite but still hate Silicon Valley?” is a much better
question, for the real enemy is not technology but the present
political and economic regime – a wild combination of the
military-industrial complex and the out-of-control banking and
advertising industries – that deploys the latest technologies to achieve its ugly
(even if lucrative and occasionally pleasant) ends. Silicon Valley
represents the most visible, the most discussed, and the most naive
part of this assemblage. In short, it’s okay to hate Silicon Valley –
we just need to do it for the right reasons.  Below are three of them
– but this is hardly an exhaustive list.

The rhetoric is as lofty as it is revolutionary

Reason number one:  Silicon Valley firms are building what I call
“invisible barbed wire” around our lives. We are promised more
freedom, more openness, more mobility; we are told we can roam
wherever and whenever we want. But the kind of emancipation that we
actually get is fake emancipation; it’s the emancipation of a
just-released criminal wearing an ankle bracelet.

Yes, a self-driving car could make our commute less dreadful. But a
self-driving car operated by Google would not just be a self-driving
car: it would be a shrine to surveillance – on wheels! It would track
everywhere we go. It might even prevent us from going to certain
places if our mood – measured through facial expression analysis –
suggests that we are too angry or tired or emotional.  Yes, there are
exceptions – at times, GPS does feel liberating – but the trend is
clear: every new Google sensor in that car would introduce a new lever
of control. That lever doesn’t even have to be exercised to produce
changes in our behavior – our knowledge of its presence will suffice.

Or take MOOCs. They would undoubtedly produce many shifts in power
relations. We know of all the visible, positive shifts: students
getting more, cheaper opportunities to learn; kids in Africa finally
taking the best courses on offer in America, and so on. But what about
the invisible shifts? Take Coursera, a company that was started by a
senior Google engineer and that has quickly become one of the leaders
in the field. It now uses biometrics – facial recognition and typing
speed analysis – to verify student identity. (This comes in handy when
they issue diplomas!) How did we go from universities with open-door
policies to universities that check their students with biometrics? As
Gilles Deleuze put it in a 1990 conversation with Tony Negri, “compared
with the approaching forms of ceaseless control in open sites, we may
come to see the harshest confinement as part of a wonderful happy
past.” This connection between the seeming openness of our
technological infrastructures and the intensifying degree of control
remains poorly understood.

What does this invisible barbed wire mean in practice? Suppose you
want to become a vegetarian. So you go to Facebook and use its Graph
Search feature to search for the favorite vegetarian restaurants of
all your friends who live nearby. Facebook understands that you are
considering an important decision that will affect several industries:
great news for the tofu industry but bad news for the meat section of
your local supermarket.

Facebook would be silly not to profit from this knowledge – so it
organizes a real-time ad auction to see whether the meat industry
wants you more than the tofu industry. This is where your fate is no
longer in your own hands. Sounds silly – until you enter your local
supermarket and your smartphone shows that the meat section offers you
a discount of 20%. The following day, as you pass by the local steak
house, your phone buzzes again: you’ve got another discount offer.
Come in – have some steak! After a week of deliberation – and lots of
cheap meat – you decide that vegetarianism is not your thing. Case
closed.

Of course, had the tofu industry won the ad auction, things might have
gone in the opposite direction. But it doesn’t matter who wins the
auction. What matters is that a decision that seems fully autonomous
is not autonomous at all. You feel liberated and empowered; you might
even write a thank-you note to Mark Zuckerberg. But this is laughable:
you are simply at the mercy of the highest bidder. And they are
bidding to show you an ad that matters – an ad based on everything
that Facebook knows about your anxieties and insecurities. This is no
longer your bland, one-dimensional advertising.

This example is hardly the product of my wild imagination: Last year,
Facebook struck a deal with a company called Datalogix, which would
allow it to tie what you buy at your local supermarket to ads that
Facebook shows you. Google already has an app – Google Field – which
constantly scans shops and restaurants in your area for the latest deals.
Nothing in this example hinges upon a hatred of technology or
information: we are dealing here with political economy, advertising,
autonomy. What does this have to do with the “digital debate”? Very
little.

The data-centric model of Silicon Valley capitalism seeks to convert
every aspect of our everyday existence – what used to be our only
respite from the vagaries of work and the anxieties of the marketplace
– into a productive asset.  This is done not just by blurring the
distinction between work and nonwork  but also by making us tacitly
accept the idea that our reputation is a work-in-progress – something
that we could and should be honing 24/7. Therefore, everything is
turned into a productive asset: our relationships, our family life,
our vacations, our sleep (you are now invited to “hack” it so that you
can get the most rest in the shortest amount of time).

The rhetoric attached to such “breakthroughs” is as lofty as it is
revolutionary, especially when mixed with subjects like “the sharing
economy.” “This is the first stage of something more profound, which
is the ability of people to structure their lives around doing
multiple sharing economy activities as a choice in lieu of a 9-to-5,
five-day-a-week job,” said Arun Sundararajan, a professor at New York
University and a big fan of “the sharing economy,” in a recent
interview. “This is technology-driven progress. This is what it’s all
about,” he added. Oh yes, “progress” has never felt so good: who
doesn’t like working 24/7 instead of 9-to-5?

When privacy is becoming expensive

Reason number two: Silicon Valley has destroyed our ability to imagine
other models for running and organizing our communication
infrastructure. Forget about models that aren’t based on advertising
and that do not contribute to the centralization of data on private
servers located in America. To suggest that we need to look into other
– perhaps even publicly provided alternatives – is to risk being
accused of wanting to “break the Internet.” We have succumbed to what
the Brazilian social theorist Roberto Unger calls “the dictatorship of
no alternatives”: we are asked to accept that Gmail is the best and
only possible way to do email, and that Facebook is the best and only
possible way to do social networking.

But consider just how weird our current arrangement is. Imagine I told
you that the post office could run on a different, innovation-friendly
business model. Forget stamps. They cost money – and why pay money
when there’s a way to send letters for free? Just think about the
world-changing potential: the poor kids in Africa can finally reach
you with their pleas for more laptops! So, instead of stamps, we would
switch to an advertising-backed system: we’d open every letter that
you send, scan its contents, insert a relevant ad, seal it, and then
forward it to the recipient.

Sounds crazy? It does. But this is how we have chosen to run our
email. In the wake of the NSA scandal and the debacle that is
Healthcare.gov, trust in public institutions runs so low that any
alternative arrangement – especially one that would give public
institutions a greater role – seems unthinkable. But this is only part
of the problem. What would happen when some of our long cherished and
privately-run digital infrastructure begins to crumble, as companies
evolve and change their business models?

Five years ago, one could still publish silly little books with titles
like “What Would Google Do?” on the assumption that the company had a
coherent and mostly benevolent philosophy, eager to subsidize
unprofitable services just because it could. After Google shut down
Google Reader and many other popular services, this benevolence can no
longer be taken for granted. In the next two or three years, a day may
come when Google announces that it is shutting down Google
Scholar – a free but completely unprofitable service that aids
millions of academics worldwide. Why aren’t we preparing for this
eventuality by building a robust publicly-run infrastructure? Doesn’t
it sound ridiculous that Europe can produce a project like CERN but
seems incapable of producing an online service to keep track of papers
written about CERN? Could it be because Silicon Valley has convinced
us that they are in the magic industry?

Now that our communication networks are in the hands of the private
sector, we should avoid making the same mistake with privacy. We
shouldn’t reduce this complex problem to market-based solutions. Alas,
thanks to Silicon Valley’s entrepreneurial zeal, privatization is
already creeping in. Privacy is becoming a commodity. How does one get
privacy these days? Just ask any hacker: only by studying how the
right tools work. Privacy is no longer something to be taken for
granted or enjoyed for free: you have to expend some resources to
master the tools. Those resources could be money, patience, attention
– you might even hire a consultant to do all this for you – but the
point is that privacy is becoming expensive.

And what of those who can’t afford tools and consultants? How do their
lives change? When the founder of a prominent lending start-up – the
former CIO of Google, no less – proclaims that “all data is credit
data, we just don’t know how to use it yet” I can’t help but fear the
worst. If “all data is credit data” and poor people cannot afford
privacy, they are in for some dark times. How can they not be anxious
when their every move, their every click, their every phone call could
be analyzed to predict if they deserve credit and at what rates? If
the burden of debt wasn’t agonizing enough, now we’ll have to live
with the fact that, for the poor people, anxiety begins well before
they get the actual loan. Once again, one doesn’t have to hate or fear
technology to worry about the future of equality, mobility and the
quality of life. The “digital debate,” with its inevitable detours
into cultural pessimism, simply has no intellectual resources to
tackle these issues.

Where are the apps to fight poverty or racial discrimination?

Reason number three:  the simplistic epistemology of Silicon Valley
has become a model that other institutions are beginning to emulate.
The trouble with Silicon Valley is not just that it enables the NSA –
it also encourages, even emboldens, the agency. It inspires the NSA to keep
searching for connections in a world of meaningless links, to record
every click, to ensure that no interaction goes unnoticed,
undocumented and unanalyzed. Like Silicon Valley, the NSA assumes that
everything is interconnected: if we can’t yet link two pieces of data,
it’s because we haven’t looked deep enough – or we need a third piece
of data, to be collected in the future, to make sense of it all.

There’s something delusional about this practice – and I don’t use
“delusional” metaphorically. For the Italian philosopher Remo Bodei,
delusion does not stem from too little psychic activity, as some
psychoanalytic theories would have it, but, rather, from too much of
it. Delirium, he notes, is “the incapacity to filter an enormous
quantity of data.” While a sane, rational person “has learned that
ignorance is vaster than knowledge and that one must resist the
temptation to find more coherence than can currently be achieved,” the
man suffering from delusion cannot stop finding coherence among
inherently incoherent phenomena. He generalizes too much, which
results in what Bodei calls “hyper-inclusion.”

“Hyper-inclusion” is exactly what plagues America’s
military-industrial complex today.  And they don’t even hide this:
thus, Gus Hunt, the chief technology officer of the CIA, confesses
that “since you can’t connect dots you don’t have … we fundamentally
try to collect everything and hang on to it forever.” Such
hyper-inclusion, according to Bodei, is the prerogative of the
deluded. For them, he writes, “the accidental, which most certainly
exists in the external world, has no right of citizenship in the
psychic one, where it is ‘curved’ to a certain explanation.” For
example, “a madman might find it significant that three people in a
larger group are wearing a red tie, and might believe that this
implies some form of persecution.” Likewise, the delirious person
believes that “the concept of St. Joseph includes not only the
individual person but also a wooden table since St. Joseph was a
carpenter.” Well, it might be “delusion” for Bodei, but as far as
Silicon Valley and Washington are concerned, we are talking about “the
semantic Web” and “Big Data”!

Silicon Valley doesn’t care that some of these connections are
spurious. When Google or Facebook mess up and show us an irrelevant ad
based on their misconceived view of who we are, it results in minor
discomfort – and little else. When the NSA or the CIA mess up, it results in a
loud drone strike (if you are lucky, you might qualify for an
all-expenses-paid, one-way trip to Guantanamo).

The other problem with Silicon Valley’s epistemology is that its view
of the world is heavily distorted by its business model. Silicon
Valley has two responses to any problem: it can produce more
“computation” (or code) or it can process more “information” (or
data).  Most likely, it will be a combination of the two, giving us
yet another app to track calories, weather and traffic. Such small
successes allow Silicon Valley to redefine “progress” as something
that naturally follows from their business plans. But while “more
computation” or “more information” could be lucrative private
responses to some problems, it doesn’t follow that they are also the most
effective responses to unwieldy, messy public problems that have deep
institutional and structural causes.

Much importance, at least in America, is attached to the immense
potential of smartphones to solve a problem like obesity. How would
this work? Well, the idea is that smartphones already monitor how
much we walk – they have sensors that do that – and they can tell us
when we are walking less than the norm. They can also – perhaps, in
some combination with Google Glass – monitor what we eat and keep
track of our diet, telling us to refuse that tempting dessert. The
assumption here, derived from behavioral economics, is that we make
irrational decisions and that highly-targeted information provided to
us at the right time via this new digital infrastructure can finally
conquer our irrationality.

But notice how, in this case, the very definition of a problem like
obesity shrinks to the neoliberal and the banal: it’s all our own
fault! We are not really trying to solve the problem – only to deploy
our tools – coding and information – to redefine the problem in the
most convenient but also least ambitious way. It may be that if you
are poor and you work several jobs and you don’t have a car to go
shopping at a farmers’ market, then consuming junk food at a local
McDonald’s is a completely rational decision: you get the food you can
afford. What’s the point of telling you what you already know: that
you are eating cheap and terrible food? The problem that needs
addressing here is that of poverty – to be tackled through economic
reforms – and not that of under-supply of information.

Sociologists have coined a term for this phenomenon: “problem
closure.” To use one recent definition, it refers to “the situation
when a specific definition of a problem is used to frame subsequent
study of the problem’s causes and consequences in ways that preclude
alternative conceptualizations of the problem.” Once the causes and
consequences have been narrowly defined, it’s no wonder that
particular solutions get most attention.  This is where we are today:
inspired by Silicon Valley, policy-makers are beginning to redefine
problems as essentially stemming from incomplete information while
envisioning solutions that only do one thing: deliver more information
through apps.  But where are the apps to fight poverty or racial
discrimination? We are building apps to fix the problems that our apps
can fix – instead of tackling problems that actually need fixing.

Let's re-inject politics and economics into this debate

Do people in Silicon Valley realize the mess that they are dragging us
into? I doubt it. The “invisible barbed wire” remains invisible even
to its builders. Whoever is building a tool to link MOOCs to biometric
identification isn’t much concerned with what this means for our
freedoms: “freedom” is not their department, they are just building
cool tools for spreading knowledge!

This is where the “digital debate” leads us astray: it knows how to
talk about tools but is barely capable of talking about social,
political, and economic systems that these tools enable and disable,
amplify and pacify.  When these systems are once again brought to the
fore of our analysis, the “digital” aspect of such tool-talk becomes
extremely boring, for it explains nothing. Deleuze warned of such
tool-centrism back in 1990:

“One can of course see how each kind of society corresponds to a
particular kind of machine – with simple mechanical machines
corresponding to sovereign societies, thermodynamic machines to
disciplinary societies, cybernetic machines and computers to control
societies. But the machines don’t explain anything, you have to
analyze the collective arrangements of which the machines are just one
component.”

In the last two decades, our ability to make such connections between
machines and “collective arrangements” has all but atrophied. This
happened, I suspect, because we’ve presumed that these machines come
from “cyberspace,” that they are of the “online” and “digital” world –
in other words, that they were bestowed upon us by the gods of “the
Internet.” And “the Internet,” as Silicon Valley keeps reminding us,
is the future. So to oppose these machines was to oppose the future
itself.

Well, this is all bunk: there’s no “cyberspace” and “the digital
debate” is just a bunch of sophistries concocted by Silicon Valley
that allow its executives to sleep well at night. (It pays well too!)
Haven’t we had enough? Our first step should be to rob them of their
banal but highly effective language. Our second step should be to rob
them of their flawed history. Our third step should be to re-inject
politics and economics into this debate. Let’s bury the “digital
debate” for good – along with the oversupply of intellectual mediocrity
it has produced along the way.


