[liberationtech] AI Salon on Deepfakes at Stanford on Fri Apr 12 @ 4 pm @ Gates 219

David Majlak davidmajlak at gmail.com
Tue Apr 9 23:59:07 CEST 2019


Hopefully it is easily understood that corporate power structures (and
lobbyists, politicians, etc.), given their financing and agendas, will use
deepfakes for their own profit, as all new inventions are used, to the
detriment of society. If Russians buying ads and internal Communist Party
propaganda (as with Taiwan avoiding Tencent and others) are any indication
of what malicious actors are currently implementing and capable of, then it
is easy to imagine what will be done with deepfakes and machine learning.

If we want to regain that trust, then a point should be made to create and
provide society with tools that automatically distinguish computationally
fabricated imagery from actual persons and media feeds - however, with
computing power growing exponentially, and quantum computing creeping onto
the mainstream market, protecting against this for the benefit of society
may no longer be possible.

An example would be a background application, such as a browser extension,
that indicates which consumed media was captured directly (analog video of
reality) and which was not (manufactured video, e.g. generated with a GAN).
Obviously, since the variety of existing technology, firmware, versions,
etc. makes it impossible to offer one universal application for the benefit
of humanity, we are doomed.
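The kind of check such an application might run can be sketched as a toy
heuristic. One property researchers have noted is that GAN upsampling
layers often leave unusual high-frequency spectral artifacts, so a crude
(and easily fooled) flag is the fraction of an image's spectral energy at
high frequencies. Everything below - the function names, the cutoff, the
threshold - is illustrative, not a real or reliable detector:

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above a radial frequency cutoff.

    GAN upsampling often leaves periodic high-frequency artifacts, so an
    unusually high ratio can hint at synthetic imagery. Toy heuristic only.
    """
    # Power spectrum, shifted so the zero frequency sits at the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised to [0, ~0.71].
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

def looks_synthetic(image: np.ndarray, threshold: float = 0.5) -> bool:
    # The threshold is illustrative; a real tool would calibrate it on
    # labelled real-vs-generated data, not hard-code a guess.
    return high_freq_energy_ratio(image) > threshold
```

A smooth natural gradient concentrates its energy near zero frequency and
scores low, while noise-like content scores high; a real detector would of
course need to be trained and adversarially hardened, since a generator can
simply be optimized to defeat any fixed statistic like this one.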

To truly prevent humanity's downfall, we must push for corporations to
explain to society, in real terms, what their goals are and how they intend
to pursue them - which would in a way eliminate trade secrets and any
competitive advantage, and which will never, ever, ever happen. If
corporations said their goal was X, society could either support it or
fight against it. But given how popular the Kardashians still are, I'd
rather have that information from corporations so I know which side of the
alliance to be on when World War 3 breaks out, because we are fucked.

A single senator or representative is supposed to voice a vote on behalf of
millions of individuals? Maybe someone should explain how that's still
logically possible... There are more people in a representative's district
than a single vote can have opinions.

Another major problem: if we don't have tools - or even government agencies
with these tools - able to tell deepfakes apart, the presumption will shift
from innocent until proven guilty to guilty until proven innocent. That
said, perhaps a way to level the playing field is to give everyone the same
tools and equipment that anyone else will have, with the instructions
written in layman's terms, allowing anyone to use the same type of tools
the elites can use. Remove the skill barrier so the playing field levels
down to intent, and maybe we'll see the resurrection of Anon.

If we can dilute the severity and importance of malicious actors with
idiots, and let novices scar the playing field the way purposefully
executed plans will, maybe we can lessen the impact when it happens.

On Tue, Apr 9, 2019 at 5:39 PM Yosem Companys <ycompanys at gmail.com> wrote:

> From: Michelle Lee <mishlee at stanford.edu>
>
> We're very excited to have our first AI Salon this quarter, Friday April
> 12 at 4pm at Gates 219 (our usual time and place), on the topic of
> "Societal Impact of Deepfakes" with Professor Maneesh Agrawala and
> Professor Jeff Hancock. More specifically, we want to discuss how the
> increasing ease of creating fake media with AI might be eroding trust in
> our society and our democratic institutions, how we might regain that
> trust, and whether the harms of deepfakes might be overhyped.
>
> About the Speakers
>
> *Maneesh Agrawala *is the Forest Baskett Professor
> <https://professorships.stanford.edu/content/School%20of%20Engineering>
>  of Computer Science <http://cs.stanford.edu/> and Director of the Brown
> Institute for Media Innovation <http://brown.stanford.edu/> at Stanford
> University. He works on computer graphics, human computer interaction and
> visualization. His focus is on investigating how cognitive design
> principles can be used to improve the effectiveness of audio/visual media.
> The goals of this work are to discover the design principles and then
> instantiate them in both interactive and automated design tools.
>
> *Jeff Hancock* is founding director of the Stanford Social Media Lab
> <https://sml.stanford.edu/> and is a Professor in the Department of
> Communication at Stanford University. Professor Hancock and his group work
> on understanding psychological and interpersonal processes in social media.
> He is well-known for his research on how people use deception with
> technology, from sending texts and emails to detecting fake online reviews.
> His TED Talk
> <https://www.ted.com/talks/jeff_hancock_3_types_of_digital_lies?language=en>
> on deception has been seen over 1 million times.
>
>
> About the Event:
>
> For those of you new to AI Salon: this is a roughly bi-weekly event on
> Fridays where we discuss high-level topics in AI and machine learning,
> with the goal of getting us to think beyond our individual day-to-day
> research and better see how what we do fits into the bigger picture of
> society. We hold these in the spirit of Enlightenment-era salons, with no
> electronics or whiteboard allowed: only free-ranging discussion open to
> all AI lab members, with a 10-minute opening statement at the beginning
> by the two hosts.
>
> This week's AI Salon is held Friday from 4pm to 5pm in Gates 219 (2nd
> floor open area), with wine & cheese served starting at 3:45 - please come
> by 3:55 pm to get wine & cheese and a seat. We start at 4pm sharp!
>
>
> Best,
>
> AI Salon Team
>
> --
> Liberationtech is public & archives are searchable from any major
> commercial search engine. Violations of list guidelines will get you
> moderated: https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe,
> change to digest mode, or change password by emailing
> lt-owner at lists.liberationtech.org.

