[liberationtech] The Invention of "Ethical AI"

R R Brooks rrb at g.clemson.edu
Fri Dec 27 17:20:34 CET 2019


I definitely was not proposing mandatory buy-in. I am very concerned about
the black-box aspect of the technology. It often seems to serve the purpose
of maintaining current inequities under the guise of mathematical impartiality.

I am extremely skeptical of systems that make decisions simply by mining
data sets, without any clear understanding of the underlying process. Yet
this seems to be the goal everyone is aiming at.

And, again, adding self-selection will only skew the training data further.
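To make that point concrete, here is a minimal sketch, with purely hypothetical population shares and opt-out rates, of how differential self-selection shifts a dataset's composition away from the underlying population:

```python
# Hypothetical illustration: two groups with different opt-out rates.
# Group A: 70% of the population, 10% opt out of data collection.
# Group B: 30% of the population, 40% opt out.
pop = {"A": 0.70, "B": 0.30}
opt_out = {"A": 0.10, "B": 0.40}

# Share of the population from each group that remains in the data set.
remaining = {g: pop[g] * (1 - opt_out[g]) for g in pop}
total = sum(remaining.values())

# Each group's share of the resulting training data.
sample = {g: remaining[g] / total for g in remaining}

print(sample)  # Group B shrinks from 30% of the population to ~22% of the data
```

With these made-up numbers, the already underrepresented group shrinks further in the training data, so any model trained on it sees that group even less often.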

For example, we have facial recognition that works great, provided you are
a white or Asian male.


On December 27, 2019 9:45:47 AM EST, Thomas Delrue <thomas at epistulae.net> wrote:
>I think I understand what you're saying, but that leads us to a
>situation where you are coerced into being strip-searched for every
>piece of data you have, just to prevent the situation of "some computer
>says no". In other words: if you don't participate, we assume the
>worst. This is better known as bullying.
>
>The more I hear about this, the more I am driven towards the conclusion
>that AI (ethical or not) has nothing to do with technology and more and
>more to do with legal liability and control...
>It is an externalization of blame onto some oracle that no one but a
>handful of priests understands. Yet have no doubt about it: these
>models are created and tweaked, just like anything else, to give the
>result that is most desirable; and once that is achieved, they are
>locked in. From then on, you can tell the entire world: "Look, we don't
>know what's going on inside the black box, and it's too complicated for
>you to torture your little brain about it, so just trust us when we say
>you should trust The All Knowing Algorithm, and sing the following
>incantation with me...". Does that sound familiar to anyone?
>
>Mandatory participation in technology just to prevent a people problem
>(because this isn't a tech problem, this is a people problem) is not
>much different from mandatory participation in self-criticism, as
>practiced in some unsavory places in the world.
>Given how this data will be used (and we have slid down that slippery
>slope time and time again), this is not in any way different from
>self-criticism, because it will be used to deny things, not to grant
>them: give us your data, so we may tell you 'no' in a myriad of
>ways...
>
>On 12/27/19 07:57, R R Brooks wrote:
>> Opting out is likely to add additional biases into the data, since I
>> imagine underrepresented groups are more likely to opt out. Lack of
>> representation is already leading to blacks and women receiving
>> substandard medical care in the USA.
>> 
>> Consider the ethical dimensions that adds.
>> 
>> 
>> 
>> On December 27, 2019 6:36:22 AM EST, carlo von lynX
>> <lynX at time.to.get.psyced.org> wrote:
>> 
>>     On Thu, Dec 26, 2019 at 03:13:27PM -0600, Andrés Leopoldo Pacheco
>Sanfuentes wrote:
>> 
>>         Anonymity doesn’t protect “Particular Social Groups” and
>>         aggregate anonymous data analysis and mining is the basis for
>>         discrimination of entire segments of the population! Like zip
>>         code discrimination. Food deserts. Etc.
>> 
>> 
>>     True, so the total unavailability of private and personal data
>>     for centralized analysis wouldn't even be enough, as we would
>>     still have to work through the dangers of public data. But
>>     radical privacy sounds to me like the most important starting
>>     point.
>> 
>>     On Thu, Dec 26, 2019 at 1:05 PM John Young <jya at pipeline.com>
>wrote:
>> 
>>         "Ethical" is a marketing, manipulative term, applied to
>exploitive,
>>         deceptive initiatives.
>> 
>> 
>>     I can very well imagine that the marketing individuals
>>     introducing the word into their corporate discourse may all be
>>     well-intentioned. The problem arises beyond their immediate
>>     understanding: corporate structures acting within the capitalist
>>     framework cannot effectively act ethically unless all of their
>>     customers are extremely aware, caring, and able to check the
>>     effective application of ethical values - and put such values
>>     before their own individual interest.
>> 
>>     In the era of individualism this just isn't happening; therefore
>>     any company trying to act ethically will be at a competitive
>>     disadvantage against those that don't.
>> 
>>     So even if there is a genuine attempt by a company's leadership
>>     to go ethical, it must be stopped ASAP to stop the loss of market
>>     share. The leadership will be replaced if it doesn't stop the
>>     ethical madness in time.
>> 
>>     In the capitalist system, the only ethical force lies in
>>     legislation, government, and jurisdiction. The market is
>>     structurally unable to ever act ethically by itself.
>> 
>> 

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.