[liberationtech] The Invention of "Ethical AI"
Charles M. Ess
c.m.ess at media.uio.no
Tue Dec 24 02:13:11 CET 2019
I'm happily participating in the AI-ethics aspects of the Wallenberg
AI, Autonomous Systems and Software Program – Humanities and Society
<https://wasp-hs.org/>.
As the Intercept article notes, some of the AI/ethics work in the
humanities and social sciences is more independent of, and more nuanced
than, the MIT- and Google-driven approaches. One can be happy and
encouraged about that - but those larger forces are despicable and
obviously difficult to challenge or counter.
- charles
On 24/12/2019 00:39, Paola Di Maio wrote:
> I have been working on AI ethics and have participated in some
> institutional efforts (like IEEE), and I can confirm that all the
> efforts I participated in were piloted and superficial, designed to
> create an impression that ethics is a concern while avoiding and
> totally failing to address it.
> I felt I was going mad for a while
> PDM
>
> On Tue, Dec 24, 2019 at 5:57 AM Yosem Companys <ycompanys at gmail.com> wrote:
>
> I missed this piece on how the field of AI ethics was seemingly
> born out of tech giants' lobbying and funding of academia. -- YC
>
> ****
>
> I learned that the discourse of “ethical AI”... was aligned
> strategically with a Silicon Valley effort seeking to avoid legally
> enforceable restrictions of controversial technologies. A key group
> behind this effort... made policy recommendations in California that
> contradicted the conclusions of research I conducted with several
> lab colleagues, research that led us to oppose the use of computer
> algorithms in deciding whether to jail people pending trial. ... I
> also watched MIT help the U.S. military brush aside the moral
> complexities of drone warfare, hosting a superficial talk on AI and
> ethics by Henry Kissinger, the former secretary of state and
> notorious war criminal, and giving input on the U.S. Department of
> Defense’s “AI Ethics Principles” for warfare, which embraced
> “permissibly biased” algorithms and which avoided using the word
> “fairness” because the Pentagon believes “that fights should not be
> fair.”
>
> https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/
>
--
Professor in Media Studies
Department of Media and Communication
University of Oslo
<http://www.hf.uio.no/imk/english/people/aca/charlees/index.html>
Co-chair & Editor, Internet Research Ethics 3.0
<https://aoir.org/reports/ethics3.pdf>
3rd edition of Digital Media Ethics out soon!
<http://politybooks.com/bookdetail/?isbn=9781509533428>
Postboks 1093
Blindern 0317
Oslo, Norway
c.m.ess at media.uio.no