“AI Ethics,” in my view, may refer to at least two separate but interdependent “disciplines”:

- a branch of ethics that deals with the ethical dilemmas of developing AI, that is, the professional ethics of AI practitioners; and

- a field of study asking the question “can a machine be made that has a conscience?” That is not just a “technical” issue, given that it isn’t clear what “a conscience” is; then again, it isn’t even clear what “intelligence” is, beyond the reductionist theories that gave us IQ and similar “metrics”! 😈 So people will “assume” a particular definition of “conscience” and use it as the reified model to replicate in a machine; that will be their “technical issue.” That “reification” then becomes a dilemma to be considered in the first discipline above.

I don’t know what Google et al. call “AI Ethics,” and I haven’t read the article, but from the extract it seems to me that they are talking about the first discipline. And so there are “ethicists” who align with warmongers, profiteers, and marketeers, and those who align with their “conscience.” And there it is again, that loosely defined term: “conscience”! I prefer the “human rights” angle.

BTW, speaking of the second “discipline”: the most recent Star Wars movie has C-3PO programmed not to reveal translations of a certain language; he is forced to do so by being rewired, which then triggers a memory wipe. His memory is “restored” once R2-D2 uploads a backup.

Regards / Saludos / Grato

Andrés Leopoldo Pacheco Sanfuentes

> On Dec 24, 2019, at 1:51 AM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>
> There are those of us who sincerely care about aligning AI with human values so that it doesn't make anti-human decisions as it becomes more powerful, and there are serious efforts in the AI safety/AI alignment communities to tackle this extraordinarily difficult technical challenge.
>
> But "AI ethics" is a much more vague notion, and not one I've seen connected to meaningful action.
>
> --Steve
>
> On Monday, December 23, 2019, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> I have been working on AI Ethics and participated in some institutional efforts (like IEEE),
>> and I can confirm that all the efforts I participated in were piloted and superficial, designed to create
>> an impression that ethics is a concern, while avoiding and failing totally to address it.
>> I felt I was going mad for a while.
>> PDM
>>
>> On Tue, Dec 24, 2019 at 5:57 AM Yosem Companys <ycompanys@gmail.com> wrote:
>>>
>>> I missed this piece on how the field of AI ethics was seemingly borne out of tech giants' lobbying and funding of academia. -- YC
>>>
>>> ****
>>> I learned that the discourse of “ethical AI”... was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies. A key group behind this effort...
>>> made policy recommendations in California that contradicted the conclusions of research I conducted with several lab colleagues, research that led us to oppose the use of computer algorithms in deciding whether to jail people pending trial. ... I also watched MIT help the U.S. military brush aside the moral complexities of drone warfare, hosting a superficial talk on AI and ethics by Henry Kissinger, the former secretary of state and notorious war criminal, and giving input on the U.S. Department of Defense’s “AI Ethics Principles” for warfare, which embraced “permissibly biased” algorithms and which avoided using the word “fairness” because the Pentagon believes “that fights should not be fair.”
>>>
>>> https://theintercept.com/2019/12/20/mit-ethical-ai-artificial-intelligence/
> --
> Liberationtech is public & archives are searchable from any major commercial search engine. Violations of list guidelines will get you moderated: https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to digest mode, or change password by emailing lt-owner@lists.liberationtech.org.