[liberationtech] OpenAI adds Trump-appointed former NSA director to its board
Lina Srivastava
lina at transformationalchange.co
Wed Jun 19 23:28:42 CEST 2024
Hi all,
Thanks for the discussion. A few months ago I wrote a piece for SSIR
<https://ssir.org/articles/entry/ai-building-community-governance#>
advocating for civil society funders and orgs to address tech co power
consolidation through community-led governance. This speaks to
accountability more broadly than to specifics of the technology/algorithms,
so I'm not sure if this directly answers your questions, Kate, but sending
it in case it is of interest.
Lina
On Tue, Jun 18, 2024 at 11:29 PM Paola Di Maio <paoladimaio10 at gmail.com>
wrote:
> Kate
> thanks for bringing up the questions, which make sense
> But technically, they may be *'ill posed' *(imho)
> That is because there is a mixup and overlap in
> terminology/concepts/implementations adopting the same
> terminology applied to different concepts etc
>
> All algorithms are in principle auditable even when they are proprietary,
> and the only way companies can maintain their
> competitive advantage is by keeping he algorithms proprietary, or de
> facto, a trade secret
> You cannot make any laws against trade secrets afaik
> Some of these algorithms are useful and amazing even, technically
> but for example, I started to notice that when I leave a whatsapp message
> to someone
> the content of my message is picked and turns up into the adveritisng on
> FB and in turn
> via some agreement that I may not know about, it turns up in adverts on
> youtube, google search etc
>
> To what extent are the search results that I obtain skewed based on my
> user profile, which is in turn based on my login credentials, which is in
> turn based on the apps/web services that i use?
> I would say it's a lot skewed. how so? by a mixture of algorithms ,
> commercial agreements, trade secrets which are all legal
>
> I think one face of the blockchain may be to disrupt this entanglement by
> encryptions and fragmentation
> but the reality, is that the master key is only visible to some, and THEY
> are building the machine, in the name of democratization of the internet,
> go figure
>
> My advice would be, start auditing individual functions
> (input-process-output) for each task/app
> then build the map of the ecosystem entanglement from there, keeping in
> mind that by means of generative algorithm
> the map is constantly reconfiguring itself, and not traceable (a property
> of the blockchain, auch)
> and NOT REPLICABLE (a property of generative algos)
>
> Very very thorny entanglement, the best we can do is to stay on top of
> things
> (scratching head)
>
>
>
> On Wed, Jun 19, 2024 at 4:52 AM Kate Krauss <katiephr at gmail.com> wrote:
>
>> Hi,
>>
>> I'm trying to understand the lay of the land.
>>
>> So, generative AI company algorithms are proprietary, like Facebook's and
>> Tiktok's have been all along. Companies still aren't sharing algorithms
>> with researchers, even if they sign a non-disclosure agreement (still
>> true?). If we can't see it, we can't analyze it, regulate it, amend it, or
>> make it accountable. I've always been surprised that people don't leak
>> them.
>>
>> Companies could be compelled to make their algorithms more transparent if
>> there were a law that requires it, but so far there's no law.
>>
>> Paola, if your field is algorithmic auditability, do you ever see
>> proprietary algorithms? If so, how?
>>
>> Also:
>>
>> Earlier today Lina Khan, head of the US Federal Trade Commission, tweeted:
>> --- Today @FTC <https://x.com/FTC> referred its case against TikTok to
>> the Civil Division at
>> @TheJusticeDept <https://x.com/TheJusticeDept>
>> . Our investigation found reason to believe that TikTok is violating or
>> about to violate the FTC Act and the Children’s Online Privacy Protection
>> Act (COPPA).
>> ----
>> As a complete non-lawyer, I was interested to see that TikTok was getting
>> in trouble partly because the FTC believes they are* about to violate
>> these laws. * Users are about to get injured by Tiktok, predicts the
>> FTC. I didn't know a company could be sued for something it hasn't done
>> wrong yet.
>>
>> If so, could this apply to generative AI companies?
>>
>> Is there a lawyer who might answer that question?
>>
>> -Kate
>>
>> ps: Here's the link for such a lawyer to sign up for this list:
>> https://lists.ghserv.net/mailman/listinfo/lt and for us, here are short
>> explanations of the FTC Act
>> <https://www.ftc.gov/legal-library/browse/statutes/federal-trade-commission-act>
>> and COPPA
>> <https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa>
>> .
>>
>>
>> On Tue, Jun 18, 2024 at 12:36 AM Kate Krauss <katiephr at gmail.com> wrote:
>>
>>> Yes, that's an interesting idea, Hans.
>>>
>>> Former NSA chief Keith Alexander, who has a history of lying about
>>> spying on Americans, is on Amazon's board.
>>>
>>> -Kate
>>>
>>> On Tue, Jun 18, 2024 at 12:21 AM Klein, Hans K <hans at gatech.edu> wrote:
>>>
>>>> The case of OpenAI is one instance of a general trend in which national
>>>> security agencies overlap with IT/media corporations.
>>>>
>>>>
>>>>
>>>> The same thing happened at Twitter, I believe:
>>>> https://twitterfiles.substack.com/p/1-thread-the-twitter-files
>>>>
>>>>
>>>>
>>>> It would be quite useful and interesting for someone to perform some
>>>> non-partisan research on such ties in general.
>>>>
>>>>
>>>>
>>>> Hans Klein
>>>>
>>>> Georgia Tech
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* LT <lt-bounces at lists.liberationtech.org> *On Behalf Of *Paola
>>>> Di Maio
>>>> *Sent:* Monday, June 17, 2024 10:46 PM
>>>> *To:* Isaac M <isaac.mao at gmail.com>
>>>> *Cc:* cat.zakrzewski at washpost.com; Kate Krauss <katiephr at gmail.com>;
>>>> LT <lt at lists.liberationtech.org>; gerrit.devynck at washpost.com; Andrés
>>>> Leopoldo Pacheco Sanfuentes <alps6085 at gmail.com>
>>>> *Subject:* Re: [liberationtech] OpenAI adds Trump-appointed former NSA
>>>> director to its board
>>>>
>>>>
>>>>
>>>> Thank you Kate for bringing up this issue here
>>>>
>>>> How do you think this should be tackled? My work is in algorithmic
>>>> auditablity, awareness and explainability
>>>>
>>>> trying to develop more understanding and possibly standards
>>>>
>>>> what do people suggest?
>>>>
>>>>
>>>>
>>>> *Note for Sawsan: I think the reference to the president here was
>>>> purely related to the person being part of that administration at the time?*
>>>>
>>>>
>>>>
>>>> *Paola Di Maio W3C AI KR CG*
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jun 18, 2024 at 4:41 AM Isaac M <isaac.mao at gmail.com> wrote:
>>>>
>>>> We should never place our hopes on company boards functioning in the
>>>> public interest. The recent debacles at Boeing and Tesla demonstrate this.
>>>> In Tesla's case, the board and shareholders with meme greed have only
>>>> indulged Elon Musk, further bolstering his feudalistic tendencies.
>>>>
>>>>
>>>>
>>>> On Tue, Jun 18, 2024 at 8:19 AM Kate Krauss <katiephr at gmail.com> wrote:
>>>>
>>>> So OpenAI has a conflicted mission, a weak board, an insanely
>>>> risky goal, and no accountability (am I missing something?). Oh right,
>>>> their product is evolving at a million miles an hour.
>>>>
>>>> They've shed many of the staff and board members who cared most about
>>>> safety.
>>>>
>>>>
>>>>
>>>> Microsoft, their funder, could reign them in but it is motivated
>>>> instead to egg them on. And now they've got a board member with very close
>>>> ties to two US presidents and one of the world's most powerful spy
>>>> agencies. The keys are on the table, as Juan Benet would say.
>>>>
>>>>
>>>>
>>>> I don't think OpenAI could be getting more press coverage--the coverage
>>>> has been near-constant and pretty responsible.
>>>>
>>>>
>>>>
>>>> Are the NGOs working on this having any luck?
>>>>
>>>>
>>>>
>>>> -Kate
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Sun, Jun 16, 2024 at 12:27 PM Andrés Leopoldo Pacheco Sanfuentes <
>>>> alps6085 at gmail.com> wrote:
>>>>
>>>> Sorry but “accountability” runs afoul of profit so many times, and the
>>>> “mission” of OpenAI is DoubleSpeak:
>>>>
>>>>
>>>>
>>>> OpenAI is an AI research and deployment company. Our mission is to
>>>> ensure that artificial general intelligence benefits all of humanity.
>>>>
>>>>
>>>>
>>>> Regards / Saludos / Grato
>>>>
>>>>
>>>>
>>>> Andrés Leopoldo Pacheco Sanfuentes
>>>>
>>>> Pronouns: He/Him/They/Them (equal preference)
>>>>
>>>>
>>>>
>>>> On Jun 16, 2024, at 10:52 AM, Kate Krauss <katiephr at gmail.com> wrote:
>>>>
>>>>
>>>>
>>>> Hi,
>>>>
>>>>
>>>>
>>>> There is currently no accountability for the decisions at OpenAI, to my
>>>> knowledge. What has to happen for that to change? The board is not working.
>>>>
>>>>
>>>>
>>>> How can the company be held accountable? I'm especially interested in
>>>> the thoughts of policy people and lawyers on this list. And yes, choosing
>>>> a spy chief for the board is a big red flag.
>>>>
>>>>
>>>>
>>>> Sincerely,
>>>>
>>>>
>>>>
>>>> Kate
>>>>
>>>>
>>>>
>>>> On Sat, Jun 15, 2024 at 12:16 AM Sawsan Gad <sawsangad at gmail.com>
>>>> wrote:
>>>>
>>>> Hello friends —
>>>>
>>>>
>>>>
>>>> I was so happy when Liberationtech was resurrected, and of course a
>>>> former head of NSA on AI is something that needs to covered and discussed.
>>>>
>>>>
>>>>
>>>> However, I hope we’re not quickly degenerating into Trump-this
>>>> Trump-that (and sensationalizing the title, only to realize the guy “was
>>>> asked to continue under Biden” buried deep down inside). (!)
>>>>
>>>>
>>>>
>>>> Journalists may need to do this kind of (… work..?) to keep their jobs
>>>> — god knows for how long. Normal people, not so much.
>>>>
>>>>
>>>>
>>>> People are working very hard to restore a basic level of trust among
>>>> family and friends, after the several political and civil abuses of the
>>>> last few years. Let’s please keep good spirits and stay relevant on the
>>>> things that we all care about, and not assume political leanings of others,
>>>> and that magic words will evoke certain reactions à la Pavlov.
>>>>
>>>>
>>>>
>>>> Now, back to discussing OpenAI. :)
>>>>
>>>> (Sorry Kate if that’s too forward. All respect to you, thank you for
>>>> sharing the article).
>>>>
>>>>
>>>>
>>>> Sawsan Gad
>>>>
>>>> PhD student - Geoinformatics
>>>>
>>>> George Mason University
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Jun 14, 2024 at 8:05 PM Kate Krauss <katiephr at gmail.com> wrote:
>>>>
>>>> Sam Altman, one of AI's most important leaders--at least for now--is a
>>>> man with incredible contacts, wonderful social skills, and apparently few
>>>> scruples. Appointing the former head of the NSA to OpenAI's board
>>>> demonstrates that this company is unaccountable. This company puts
>>>> Americans--and everybody else in the world--at risk.
>>>>
>>>>
>>>>
>>>> How can OpenAI be made accountable? The stakes are so high. Its board
>>>> has already failed to contain it.
>>>>
>>>>
>>>>
>>>> Not even the worst part of this, but new board member Nakasone's hobby
>>>> horse is that the US must out-compete China in generative AI.
>>>>
>>>>
>>>>
>>>> -Kate
>>>>
>>>>
>>>>
>>>> ps: What happens at OpenAI if Trump is re-elected?
>>>>
>>>>
>>>>
>>>>
>>>> *Washington Post: OpenAI adds Trump-appointed former NSA director to
>>>> its board *
>>>> Paul M. Nakasone joins OpenAI’s board following a dramatic shakeup, as
>>>> a tough regulatory environment pushes tech companies to board members with
>>>> military expertise.
>>>>
>>>> By Cat Zakrzewski and Gerrit De Vynck
>>>> Updated June 14, 2024 at 12:16 p.m. EDT|Published June 13, 2024 at 5:00
>>>> p.m. ED
>>>>
>>>>
>>>>
>>>> The board appointment of retired Army Gen. Paul M. Nakasone comes as
>>>> OpenAI tries to quell criticism of its security practices. (Ricky
>>>> Carioti/The Washington Po
>>>>
>>>> OpenAI has tapped former U.S. Army general and National Security Agency
>>>> director Paul M. Nakasone to join its board of directors, the continuation
>>>> of a reshuffling spurred by CEO Sam Altman’s temporary ousting in November.
>>>>
>>>> Nakasone, a Trump appointee who took over the NSA in 2018 and was asked
>>>> to continue in the role under Biden, will join the OpenAI board’s Safety
>>>> and Security Committee, which the company stood up in late May to evaluate
>>>> and improve its policies to test models and curb abuse.
>>>>
>>>> The appointment of the career Army officer, who was the longest-serving
>>>> leader of U.S. Cybercom, comes as OpenAI tries to quell criticism of its
>>>> security practices — including from some of the company’s current and
>>>> former employees who allege the ChatGPT-maker prioritizes profits over the
>>>> safety of its products. The company is under increasing scrutiny following
>>>> the exodus of several key employees and a public letter that called for
>>>> sweeping changes to its practices.
>>>>
>>>> “OpenAI occupies a unique role, facing cyber threats while pioneering
>>>> transformative technology that could revolutionize how institutions combat
>>>> them," Nakasone told the Post in a statement. "I am looking forward to
>>>> supporting the company in safeguarding its innovations while leveraging
>>>> them to benefit society at large.”
>>>>
>>>> Amid the public backlash, OpenAI has said it is hiring more security
>>>> engineers and increasing transparency about its approach to securing the
>>>> systems that power its research. Last week, a former employee, Leopold
>>>> Aschenbrenner, said on a podcast that he had written a memo to OpenAI’s
>>>> board last year because he felt the company’s security was “egregiously
>>>> insufficient” to stop a foreign government from taking control of its
>>>> technology by hacking.
>>>>
>>>> Security researchers have also pointed out that chatbots are vulnerable
>>>> to “prompt injection” attacks, in which hackers can break in to a company’s
>>>> computer system through a chatbot that is hooked up to its internal
>>>> databases. Some companies also ban their employees from using ChatGPT out
>>>> of concern that OpenAI may not be able to properly protect sensitive
>>>> information fed into its chatbot.
>>>>
>>>> Nakasone joins OpenAI’s board following a dramatic board shake-up. Amid
>>>> a tougher regulatory environment and increased efforts to digitize
>>>> government and military services, tech companies are increasingly seeking
>>>> board members with military expertise. Amazon’s board includes Keith
>>>> Alexander, who was previously the commander of U.S. Cyber Command and the
>>>> director of the NSA. Google Public Sector, a division of the company that
>>>> focuses on selling cloud services to governments, also has retired generals
>>>> on its board. (Amazon founder Jeff Bezos owns The Washington Post.)
>>>>
>>>>
>>>> Until January, OpenAI had a ban on the use of its products for
>>>> “military and warfare.” The company says the prohibition was removed to
>>>> allow for military uses that align with its values, including disaster
>>>> relief and support for veterans.
>>>> “Our policies have consistently prohibited the use of our tools
>>>> including our API and ChatGPT to ‘develop or use weapons, injure others or
>>>> destroy property,’” OpenAI spokesperson Liz Bourgeois said. “That has not
>>>> changed.” Nakasone did not respond to a request for comment.
>>>>
>>>> Nakasone brings deep Washington experience to the board, as the company
>>>> tries to build a more sophisticated government relations strategy and push
>>>> the message to policymakers that U.S. AI companies are a bulwark against
>>>> China.
>>>> “We want to make sure that American companies ... have the lead in the
>>>> innovation of this technology, I think the disruptive technology of this
>>>> century,” Nakasone said when asked about AI during a recent Post Live
>>>> interview.
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> --
>>>> Liberationtech is public & archives are searchable. List rules:
>>>> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
>>>> digest mode, or change password by emailing
>>>> lt-owner at lists.liberationtech.org.
>>>>
>>>> --
>>>> Liberationtech is public & archives are searchable. List rules:
>>>> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
>>>> digest mode, or change password by emailing
>>>> lt-owner at lists.liberationtech.org.
>>>>
>>>> --
>>>> Liberationtech is public & archives are searchable. List rules:
>>>> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
>>>> digest mode, or change password by emailing
>>>> lt-owner at lists.liberationtech.org.
>>>>
>>>> --
>>>> Liberationtech is public & archives are searchable. List rules:
>>>> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
>>>> digest mode, or change password by emailing
>>>> lt-owner at lists.liberationtech.org.
>>>>
>>>> --
> Liberationtech is public & archives are searchable. List rules:
> https://lists.ghserv.net/mailman/listinfo/lt. Unsubscribe, change to
> digest mode, or change password by emailing
> lt-owner at lists.liberationtech.org.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ghserv.net/pipermail/lt/attachments/20240619/e82f09af/attachment.htm>
More information about the LT
mailing list