<div dir="ltr"><div>There is a lot to say</div><div>I have been studying this system for decades, the machine is already in place</div><div>THEY use it to make determination about reality</div><div>The fact is that, there is knowledge misrepresentation (facts are stated are either not true, or biased, or partially true)</div><div>The logic of inference is also incorrect/biased (for reasons of complexity)</div><div>The Blockchain is a mechanism to obfuscate how the whole process is taking place by fragmentation</div><div>and hiding the keys, except for those who hold the masterkey</div><div>Studying AI and understanding how it is being used/deployed is our only hope to counter its power and effect</div><div>its about survival of humankind</div><div>General public, politicians, policy makers, are given jobs but not the knowledge that they need to <br></div><div>do their policy/AI governance jobs properly</div><div>The machine knows how to design loopholes into the legislation/governance/policy and the entire</div><div>policy making is a minefield (takes a lot of effort to develop and implement, but it is not effective and those who</div><div>designed it with flaws know how to work around the loopholes even before the legislation is in place)<br></div><div>Those who understand what is going on are ejected/persecuted by the machine<br></div><div><br></div><div><a href="https://www.youtube.com/watch?v=fn1R-5p_j5c">https://www.youtube.com/watch?v=fn1R-5p_j5c</a></div><div><br></div><div>Discuss?<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Jun 18, 2024 at 4:58 PM Arzak Khan <<a href="mailto:arzak@innovationforchange.net">arzak@innovationforchange.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:arial,sans-serif;font-weight:400;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;background-color:transparent;color:rgb(0,0,0)">The landscape surrounding OpenAI, particularly its governance and the broader implications of its work, is complex and contentious. OpenAI’s mission to ensure AGI benefits all of humanity is ambitious but fraught with inherent contradictions. Balancing rapid technological advancement with safety and ethical considerations is a formidable challenge, especially given the exponential pace of AI development. The recent shakeup at OpenAI, including the departure of key figures who prioritized safety, has raised significant concerns. The presence of influential individuals with strong political and intelligence ties only adds to the complexity, sparking debates about the true direction and control of the organization.</span></p><br><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt"><span style="font-size:11pt;font-family:arial,sans-serif;font-weight:400;font-style:normal;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;background-color:transparent;color:rgb(0,0,0)">While NGOs play a critical role in advocating for AI safety and ethical considerations, their success is tempered by the fast pace of technological development and the powerful commercial interests driving AI innovation with very little opportunities and resources for participation in AI development especially for organizations from Global South. </span></p></div><br clear="all"><br clear="all"><div><div dir="ltr" class="gmail_signature">Arzak Khan<br>Founder<br>(I4C) Center for Artificial Intelligence and Human Rights <br><a href="http://www.i4caihr.net" target="_blank">www.i4caihr.net</a></div></div></div><div dir="ltr" class="gmail_signature">X:@arzakkhan </div><div><br></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 18 Jun 2024 at 5:20 AM, Kate Krauss <<a href="mailto:katiephr@gmail.com" target="_blank">katiephr@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small">So OpenAI has a conflicted mission, a weak board, an insanely risky goal, and no accountability (am I missing something?). Oh right, their product is evolving at a million miles an hour. <br><br>They've shed many of the staff and board members who cared most about safety. </div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">Microsoft, their funder, could reign them in but it is motivated instead to egg them on. And now they've got a board member with very close ties to two US presidents and one of the world's most powerful spy agencies. The keys are on the table, as Juan Benet would say.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">I don't think OpenAI could be getting more press coverage--the coverage has been near-constant and pretty responsible. </div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">Are the NGOs working on this having any luck? </div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">-Kate</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small"><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jun 16, 2024 at 12:27 PM Andrés Leopoldo Pacheco Sanfuentes <<a href="mailto:alps6085@gmail.com" target="_blank">alps6085@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Sorry but “accountability” runs afoul of profit so many times, and the “mission” of OpenAI is DoubleSpeak:<div><br></div><div><span style="font-family:Söhne,sans-serif;font-size:22.68px;letter-spacing:-0.058968px;text-align:center;background-color:rgb(255,255,255);color:rgb(0,0,0)">OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.</span></div><div><div style="text-align:center"><font face="Söhne, sans-serif" style="font-family:Söhne,sans-serif;color:rgb(0,0,0)"><span style="font-size:22.68px;letter-spacing:-0.058968px;font-family:Söhne,sans-serif;background-color:rgb(255,255,255)"><br></span></font></div><div><div dir="ltr">Regards / Saludos / Grato<div><br></div><div>Andrés Leopoldo Pacheco Sanfuentes</div><div>Pronouns: He/Him/They/Them (equal preference)</div></div><div dir="ltr"><br><blockquote type="cite">On Jun 16, 2024, at 10:52 AM, Kate Krauss <<a href="mailto:katiephr@gmail.com" target="_blank">katiephr@gmail.com</a>> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><div dir="ltr"><div style="font-size:small">Hi,</div><div style="font-size:small"><br></div><div style="font-size:small">There is currently no accountability for the decisions at OpenAI, to my knowledge. What has to happen for that to change? The board is not working. </div><div style="font-size:small"><br></div><div style="font-size:small">How can the company be held accountable? I'm especially interested in the thoughts of policy people and lawyers on this list. And yes, choosing a spy chief for the board is a big red flag.</div><div style="font-size:small"><br></div><div style="font-size:small">Sincerely,</div><div style="font-size:small"><br></div><div style="font-size:small">Kate</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jun 15, 2024 at 12:16 AM Sawsan Gad <<a href="mailto:sawsangad@gmail.com" target="_blank">sawsangad@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Hello friends —</div><div dir="auto"><br></div><div dir="auto">I was so happy when Liberationtech was resurrected, and of course a former head of NSA on AI is something that needs to covered and discussed. </div><div dir="auto"><br></div><div dir="auto">However, I hope we’re not quickly degenerating into Trump-this Trump-that (and sensationalizing the title, only to realize the guy “was asked to continue under Biden” buried deep down inside). (!)</div><div dir="auto"><br></div><div dir="auto">Journalists may need to do this kind of (… work..?) to keep their jobs — god knows for how long. Normal people, not so much. </div><div dir="auto"><br></div><div dir="auto">People are working very hard to restore a basic level of trust among family and friends, after the several political and civil abuses of the last few years. Let’s please keep good spirits and stay relevant on the things that we all care about, and not assume political leanings of others, and that magic words will evoke certain reactions à la Pavlov. </div><div dir="auto"><br></div><div dir="auto">Now, back to discussing OpenAI. :)</div><div dir="auto">(Sorry Kate if that’s too forward. All respect to you, thank you for sharing the article). </div><div dir="auto"><br></div><div dir="auto">Sawsan Gad</div><div dir="auto">PhD student - Geoinformatics</div><div dir="auto">George Mason University</div><div dir="auto"><br></div><div dir="auto"><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Fri, Jun 14, 2024 at 8:05 PM Kate Krauss <<a href="mailto:katiephr@gmail.com" target="_blank">katiephr@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="font-size:small">Sam Altman, one of AI's most important leaders--at least for now--is a man with incredible contacts, wonderful social skills, and apparently few scruples. Appointing the former head of the NSA to OpenAI's board demonstrates that this company is unaccountable. This company puts Americans--and everybody else in the world--at risk.</div><div style="font-size:small"><br></div><div style="font-size:small">How can OpenAI be made accountable? The stakes are so high. Its board has already failed to contain it. </div><div style="font-size:small"><br></div><div style="font-size:small">Not even the worst part of this, but new board member Nakasone's hobby horse is that the US must out-compete China in generative AI.<br></div><div style="font-size:small"><br></div><div style="font-size:small">-Kate</div><div style="font-size:small"><br></div><div style="font-size:small">ps: What happens at OpenAI if Trump is re-elected?<br><br><b>Washington Post:<br><br>OpenAI adds Trump-appointed former NSA director to its board<br></b><br>Paul M. Nakasone joins OpenAI’s board following a dramatic shakeup, as a tough regulatory environment pushes tech companies to board members with military expertise.<br><br>By Cat Zakrzewski and Gerrit De Vynck<br>Updated June 14, 2024 at 12:16 p.m. EDT|Published June 13, 2024 at 5:00 p.m. ED</div><div style="font-size:small"><br></div><div style="font-size:small">The board appointment of retired Army Gen. Paul M. Nakasone comes as OpenAI tries to quell criticism of its security practices. (Ricky Carioti/The Washington Po</div><div style="font-size:small">OpenAI has tapped former U.S. Army general and National Security Agency director Paul M. Nakasone to join its board of directors, the continuation of a reshuffling spurred by CEO Sam Altman’s temporary ousting in November.<br><br>Nakasone, a Trump appointee who took over the NSA in 2018 and was asked to continue in the role under Biden, will join the OpenAI board’s Safety and Security Committee, which the company stood up in late May to evaluate and improve its policies to test models and curb abuse.<br><br>The appointment of the career Army officer, who was the longest-serving leader of U.S. Cybercom, comes as OpenAI tries to quell criticism of its security practices — including from some of the company’s current and former employees who allege the ChatGPT-maker prioritizes profits over the safety of its products. The company is under increasing scrutiny following the exodus of several key employees and a public letter that called for sweeping changes to its practices.<br><br>“OpenAI occupies a unique role, facing cyber threats while pioneering transformative technology that could revolutionize how institutions combat them," Nakasone told the Post in a statement. "I am looking forward to supporting the company in safeguarding its innovations while leveraging them to benefit society at large.”<br><br>Amid the public backlash, OpenAI has said it is hiring more security engineers and increasing transparency about its approach to securing the systems that power its research. Last week, a former employee, Leopold Aschenbrenner, said on a podcast that he had written a memo to OpenAI’s board last year because he felt the company’s security was “egregiously insufficient” to stop a foreign government from taking control of its technology by hacking.<br><br>Security researchers have also pointed out that chatbots are vulnerable to “prompt injection” attacks, in which hackers can break in to a company’s computer system through a chatbot that is hooked up to its internal databases. Some companies also ban their employees from using ChatGPT out of concern that OpenAI may not be able to properly protect sensitive information fed into its chatbot.<br><br>Nakasone joins OpenAI’s board following a dramatic board shake-up. Amid a tougher regulatory environment and increased efforts to digitize government and military services, tech companies are increasingly seeking board members with military expertise. Amazon’s board includes Keith Alexander, who was previously the commander of U.S. Cyber Command and the director of the NSA. Google Public Sector, a division of the company that focuses on selling cloud services to governments, also has retired generals on its board. (Amazon founder Jeff Bezos owns The Washington Post.)</div><div style="font-size:small"><br>Until January, OpenAI had a ban on the use of its products for “military and warfare.” The company says the prohibition was removed to allow for military uses that align with its values, including disaster relief and support for veterans.<br>“Our policies have consistently prohibited the use of our tools including our API and ChatGPT to ‘develop or use weapons, injure others or destroy property,’” OpenAI spokesperson Liz Bourgeois said. “That has not changed.” Nakasone did not respond to a request for comment.<br><br>Nakasone brings deep Washington experience to the board, as the company tries to build a more sophisticated government relations strategy and push the message to policymakers that U.S. AI companies are a bulwark against China.<br>“We want to make sure that American companies ... have the lead in the innovation of this technology, I think the disruptive technology of this century,” Nakasone said when asked about AI during a recent Post Live interview.<br></div><div style="font-size:small"><br></div><div style="font-size:small">--</div></div>
-- <br>
Liberationtech is public & archives are searchable. List rules: <a href="https://lists.ghserv.net/mailman/listinfo/lt" rel="noreferrer" target="_blank">https://lists.ghserv.net/mailman/listinfo/lt</a>. Unsubscribe, change to digest mode, or change password by emailing <a href="mailto:lt-owner@lists.liberationtech.org" target="_blank">lt-owner@lists.liberationtech.org</a>.</blockquote></div></div></blockquote></div></div></blockquote></div></div></div></blockquote></div></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div><div><blockquote type="cite"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div class="gmail_quote" dir="auto"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
</blockquote></div></div>
</blockquote></div>
<span>-- </span><br><span>Liberationtech is public & archives are searchable. List rules: <a href="https://lists.ghserv.net/mailman/listinfo/lt" target="_blank">https://lists.ghserv.net/mailman/listinfo/lt</a>. Unsubscribe, change to digest mode, or change password by emailing <a href="mailto:lt-owner@lists.liberationtech.org" target="_blank">lt-owner@lists.liberationtech.org</a>.</span><br></div></blockquote></div></div></div></blockquote></div></div>
-- <br>
Liberationtech is public & archives are searchable. List rules: <a href="https://lists.ghserv.net/mailman/listinfo/lt" rel="noreferrer" target="_blank">https://lists.ghserv.net/mailman/listinfo/lt</a>. Unsubscribe, change to digest mode, or change password by emailing <a href="mailto:lt-owner@lists.liberationtech.org" target="_blank">lt-owner@lists.liberationtech.org</a>.<br>
</blockquote></div></div>
-- <br>
Liberationtech is public & archives are searchable. List rules: <a href="https://lists.ghserv.net/mailman/listinfo/lt" rel="noreferrer" target="_blank">https://lists.ghserv.net/mailman/listinfo/lt</a>. Unsubscribe, change to digest mode, or change password by emailing <a href="mailto:lt-owner@lists.liberationtech.org" target="_blank">lt-owner@lists.liberationtech.org</a>.<br>
</blockquote></div>