[liberationtech] The Internet Needs J S Mill
Bryant, Tony
A.Bryant at leedsbeckett.ac.uk
Mon Jan 13 12:04:20 CET 2020
The Internet Needs J S Mill
"It is hardly possible to overrate the value, in the present low state of human improvement, of placing human beings in contact with persons dissimilar to themselves, and with modes of thought and action unlike those with which they are familiar." J S Mill<https://www.goodreads.com/quotes/99324-it-s-hardly-possible-to-overstate-the-value-in-the-present>
When Mick 'Crocodile' Dundee<https://en.wikipedia.org/wiki/Crocodile_Dundee> arrives in New York he is told that it is home to seven million people. He replies: 'That's incredible. Imagine seven million people all wanting to live together. Yeah, New York must be the friendliest place on earth<https://getyarn.io/yarn-clip/a0987942-28b2-4191-9746-0cf53910620d>.' Ironic in the extreme, given the city's reputation for widespread and gratuitous violence in the 1980s.
In similar fashion, one can imagine Tim Berners-Lee anticipating that his development of the World Wide Web would herald an era of universal friendship and knowledge-sharing, ushering in global cooperation and mutual regard. Yet thirty years on, Berners-Lee launched 'a global action plan to save the web from political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a "digital dystopia<https://www.theguardian.com/technology/2019/nov/24/tim-berners-lee-unveils-global-plan-to-save-the-internet>".'
What happened?
The principle of putting everyone in contact with one another, with access to a vast array of information, has gone drastically awry. The development of readily accessible digital media for communication and exchange on a massive scale, within an increasingly 'individualized' society, has provided fertile ground for the proliferation and amplification of some of the most malign aspects of human behaviour.
Most acknowledge that the malaise exists, but there is less agreement regarding cause, blame, and responsibility. Many have focused on the technology, more specifically on social media, and in particular the algorithms<https://www.wired.com/story/not-youtubes-algorithm-radicalizes-people/> that underlie various recommender systems, notably those connected with Google and Facebook. This is largely a case of the streetlight effect<https://en.wikipedia.org/wiki/Streetlight_effect>, or the drunkard's search for lost keys: people searching for the solution where it is easiest to look.
Algorithms are human artefacts, and it is now widely understood that they encompass and perpetuate the assumptions, biases, and cultural backgrounds of their developers. 'Computers learn how to be racist<https://www.vox.com/science-and-health/2017/4/17/15322378/how-artificial-intelligence-learns-how-to-be-racist>, sexist, and prejudiced in a similar way that a child does, ... from their creators<https://www.vox.com/2018/4/3/17168256/google-racism-algorithms-technology>.'
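To see how this happens, consider a deliberately minimal sketch (in Python, with invented data; this is nobody's production system) of a 'model' that does nothing more than count patterns in skewed training examples:

    from collections import Counter, defaultdict

    # Toy 'historical' training data in which gendered pronouns co-occur
    # unevenly with occupations - the bias lives in the data itself.
    TRAINING_DATA = [
        ("he", "engineer"), ("he", "engineer"), ("he", "doctor"),
        ("she", "nurse"), ("she", "nurse"), ("she", "receptionist"),
    ]

    # Count co-occurrences, much as a naive language model would.
    counts = defaultdict(Counter)
    for pronoun, occupation in TRAINING_DATA:
        counts[pronoun][occupation] += 1

    def most_likely_occupation(pronoun):
        """Return the occupation the 'model' most associates with a pronoun."""
        return counts[pronoun].most_common(1)[0][0]

    print(most_likely_occupation("he"))   # -> 'engineer'
    print(most_likely_occupation("she"))  # -> 'nurse'

The code is not prejudiced by design; it faithfully reproduces whatever associations it is shown, which is precisely the point.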
Rebecca Lewis has coined the term Alternative Influence Networks<https://datasociety.net/output/alternative-influence/> to account for the ways in which the recommendation algorithms on YouTube and elsewhere lead users along a path to sites exhibiting ever more extreme and malevolent content, particularly racist, homophobic, and misogynist material. Yet this raises the question of how such networks develop in the first place: are they creations of the web, or do they originate elsewhere?
Munger and Phillips<https://osf.io/73jys/> have recently argued that these networks depend on the supply of, and demand for, such content, both of which exist prior to and outside the confines of the web and any recommender systems. They see the focus on algorithms as misplaced and unlikely to prove effective in combatting online hatred and our downward plunge into digital dystopia.
Google, Facebook, and their ilk certainly need to develop greater awareness of the ways in which their seemingly neutral algorithms perpetuate and amplify existing biases and a wide range of forms of discrimination, prejudice, and hatred. Indeed, each has committed to remedying the ways in which its recommender algorithms operate<https://www.washingtonpost.com/technology/2019/01/25/youtube-is-changing-its-algorithms-stop-recommending-conspiracies/>; albeit not always with the intended results<https://mashable.com/2018/05/07/facebook-news-feed-algorithm-fox-news/?europe=true>.
Moreover, some of their efforts to combat online hate have been far more hands-on. Recent accounts by whistle-blowers attest to Google employees intervening to change how the company's search algorithm<https://arstechnica.com/tech-policy/2019/11/google-search-results-have-more-human-help-than-you-think-report-finds/> operates. This has included ensuring that queries relating to suicide no longer produce 'how-to manuals' on the first page; instead, the top result is now a link to the National Suicide Prevention Lifeline<https://suicidepreventionlifeline.org/>.
Similarly, Google employees made a conscious choice about handling anti-vax messaging<https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753>, so a search for 'how do vaccines cause autism' now gives priority to howdovaccinescauseautism.com<https://www.google.com/search?client=firefox-b-d&q=howdovaccinescauseautism.com>, which simply states in large black letters, 'They f-ing don't.'
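One can picture the mechanism behind such interventions as a hand-curated 'override' layer sitting in front of the organic ranking. The following Python fragment is purely a hypothetical sketch - the trigger phrases, function names, and structure are invented for illustration and bear no relation to Google's actual implementation:

    # Hypothetical curated overrides: trigger phrase -> result pinned to
    # the top of page one. Illustrative only.
    CURATED_OVERRIDES = {
        "suicide": "https://suicidepreventionlifeline.org/",
        "vaccines cause autism": "http://howdovaccinescauseautism.com/",
    }

    def search(query, organic_results):
        """Return results, pinning a curated link first whenever a
        sensitive trigger phrase appears in the query."""
        for trigger, pinned in CURATED_OVERRIDES.items():
            if trigger in query.lower():
                return [pinned] + [r for r in organic_results if r != pinned]
        return organic_results

    print(search("how do vaccines cause autism",
                 ["anti-vax-blog.example", "news.example"]))
    # -> ['http://howdovaccinescauseautism.com/',
    #     'anti-vax-blog.example', 'news.example']

Whatever the real machinery, the essential feature is the same: a human judgement, encoded once, silently reshaping what millions of searchers see.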
These are laudable examples, but this covert practice of Google employees deciding how and whether information should appear in people's searches might not be welcome as a universal strategy. Certainly, Google and Facebook need to respond to the shortcomings of their recommender systems, but this needs to be done in an open and participatory manner. Not surprisingly, their efforts have been derided as censorship from both right<https://www.washingtonpost.com/politics/2019/08/01/are-google-facebook-censoring-conservatives-problem-is-more-widespread-than-that/> and left<https://www.mintpressnews.com/facebook-and-google-are-censoring-us-heres-how-you-can-help/237159/>.
So where does this leave Berners-Lee's call for 'a contract for the web<https://contractfortheweb.org/>', an initiative specifically aimed at harnessing a wide variety of contending interests - the goodwill and benevolent intentions of governments, companies, and citizens - to work together for the common good? Google and Facebook, owners of the major social media sites, have signed up to the contract; a supreme irony in many people's eyes.
Let's for the moment accept that recommender systems can act as a conduit, channelling hatred either as a cause or a contributing factor. Let's also accept that efforts by Google and Facebook, although probably well-intended, will and should prompt the question 'quis custodiet ipsos custodes<https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%3F>?' So, what could Google and Facebook do to indicate their commitment to Berners-Lee's venture?
One possibility would be to build on their earlier hands-on efforts, but more transparently, incorporating representatives from the other constituent interests encompassed by the contract. But this will need to be undertaken with a different orientation: combatting 'the filter bubble<https://en.wikipedia.org/wiki/Filter_bubble>' - i.e. 'the intellectual isolation that can occur when websites make use of algorithms to selectively assume the information a user would want to see'<https://www.techopedia.com/definition/28556/filter-bubble>. This will involve trying to ensure that recommender algorithms offer links to contending and divergent ideas, allowing the possibility that users will have some 'contact with persons dissimilar to themselves'. There will of course be contention regarding what this strategy involves, and how it is best enacted, but at least it might begin to lift us out of 'the present low state of human improvement' towards something more akin to Crocodile Dundee's - and Berners-Lee's - model of friendliness.
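By way of illustration only, here is one minimal, hypothetical way a recommender could reserve space for 'dissimilar' material; every name and number is invented, and a real system would need far more care about what counts as a 'viewpoint':

    def diversified_ranking(items, user_viewpoints, slots=10, reserved=3):
        """Rank items by predicted-interest score, but reserve a few of
        the available slots for items whose viewpoint the user does not
        already engage with.

        `items` is a list of (item_id, score, viewpoint) tuples;
        `user_viewpoints` is the set of viewpoints the user already sees.
        """
        by_score = sorted(items, key=lambda it: it[1], reverse=True)
        familiar = [it for it in by_score if it[2] in user_viewpoints]
        unfamiliar = [it for it in by_score if it[2] not in user_viewpoints]

        # Fill most slots as usual, then guarantee some 'contact with
        # persons dissimilar to themselves'.
        chosen = familiar[: slots - reserved] + unfamiliar[:reserved]
        return sorted(chosen, key=lambda it: it[1], reverse=True)

    items = [
        ("a1", 0.94, "left"), ("a2", 0.91, "left"), ("a3", 0.88, "left"),
        ("b1", 0.61, "right"), ("c1", 0.57, "centre"),
    ]
    print(diversified_ranking(items, user_viewpoints={"left"},
                              slots=4, reserved=2))
    # -> [('a1', 0.94, 'left'), ('a2', 0.91, 'left'),
    #     ('b1', 0.61, 'right'), ('c1', 0.57, 'centre')]

The 'reserved' parameter is where the contention would live: how much dissimilarity to inject, and who decides, are exactly the questions the contract for the web would need to answer openly.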
Tony Bryant
Antony Bryant
Professor of Informatics, Leeds Beckett University
Headingley Campus, Leeds LS6 3QS, UK
phone +44 (0) 113 812 7422
skype tonybryantleedsbeckett