[liberationtech] UK academics set to launch 'virus' software for online ‘hate speech’ in time for 2020 election
Yosem Companys
ycompanys at gmail.com
Thu Jan 2 00:33:36 CET 2020
- Researchers at the University of Cambridge have proposed a software
program that treats online “hate speech” like a computer virus.
- Users would be presented with a warning and a “Hate O’Meter” rating
before deciding whether or not to view content that may be regarded as
“hate speech."
EXCERPT:
Researchers at one of the world's oldest universities hope to launch a
technology that allows users to block online "hate speech" much like a
computer virus. Users will be able to decide whether or not they want to
view content with the help of a handy "Hate O'Meter."
Thanks to researchers at the University of Cambridge, the largest social
media companies in the world may soon have the ability to preemptively
quarantine content classified by an algorithm as “hate speech.” On October
14, 2019, researcher Stephanie Ullmann and professor Marcus Tomalin
published a proposal in the Ethics and Information Technology
<https://link.springer.com/article/10.1007/s10676-019-09516-z> journal
promoting an invention that they claim could accomplish this goal without
infringing on individual rights of free speech. Their proposal involves
software that uses an algorithm to identify "hate speech" in much the same
way an antivirus program detects malware. It would then be up to the viewer
of such content to either leave it in quarantine or view it.
Ullmann and Tomalin argue that exposure to online "hate speech" is a type
of harm which “is [as] serious as other sub-types [of harm] (e.g.,
physical, financial)” and social media users deserve protection from such
harm. The proposal states that social media companies’ attempts to combat
"hate speech" have been inaccurate, untimely, and leaves the companies open
to claims of free speech violations. Tomalin argues a middle ground can be
found between those who wish to stop all "hate speech" and those who want
to protect uninhibited First Amendment speech.
Currently, social media companies primarily combat "hate speech" through a
report-and-review method: one user reports another for "hate speech," the
company reviews the post, and it then decides whether or not to censor the
poster. Tomalin believes this is not ideal, as it "does not undo the harm
that such material has already caused
when posted online . . . it would be far better to intercept potentially
offensive posts at an earlier stage of the process, ideally before the
intended recipient has read them.”
Tomalin's proposal would use a sophisticated algorithm that evaluates not
just the post itself, but also all content previously posted by the user, to
determine whether a post might be classifiable as "hate speech". If it is
not, the post appears in the social media feed like any regular post. If the
algorithm flags it as possible "hate speech", the post is quarantined and
readers must opt in to view it (see the sketch after the excerpt below). A
graph
<https://link.springer.com/article/10.1007/s10676-019-09516-z?shared-article-renderer#Fig4>
from the proposal illustrates this process...
[...]
https://www.campusreform.org/?ID=14149
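
To make the mechanism concrete, here is a minimal sketch of that
quarantine-and-opt-in flow in Python. The `hate_probability` classifier, the
0.5 threshold, and the percentage-style "Hate O'Meter" rating are illustrative
assumptions; the paper describes the approach conceptually but does not
publish an implementation, and a real system would score posts with a trained
model over both the post text and the author's posting history.

# Minimal sketch of the quarantine flow described in the excerpt above.
# The classifier, threshold, and Hate O'Meter formatting are illustrative
# assumptions, not the authors' actual implementation.
from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.5  # assumed cutoff; the proposal does not fix a value

@dataclass
class Post:
    author: str
    text: str

def hate_probability(post: Post, author_history: list[str]) -> float:
    """Toy stand-in for the classifier: a real system would use a trained
    model over both the post text and the author's previous posts."""
    flagged = {"exampleslur", "examplethreat"}  # placeholder vocabulary
    words = (post.text + " " + " ".join(author_history)).lower().split()
    return min(1.0, sum(w in flagged for w in words) / 3)

def deliver(post: Post, author_history: list[str]) -> None:
    score = hate_probability(post, author_history)
    if score < QUARANTINE_THRESHOLD:
        # Low score: the post enters the feed like any other.
        print(post.text)
        return
    # High score: quarantine the post, show a warning with a "Hate O'Meter"
    # rating, and reveal the content only if the reader opts in.
    print(f"Warning: possible hate speech (Hate O'Meter: {round(score * 100)}%).")
    if input("View anyway? [y/N] ").strip().lower() == "y":
        print(post.text)

The design choice the authors emphasize is that the decision to view
quarantined content rests with the recipient rather than the platform, which
is how the proposal aims to limit exposure to harm without outright censoring
the poster.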