OpenAI on a political charm offensive, and a fake fire at the Pentagon
May 26, 2023: Sam tries to charm the humans
Our take
With broad recognizance that something should be done about regulating AI, but broad confusion about what to do, OpenAI is notably going on the political offensive this week following Sam Altman’s seemingly successful testimony to the US Congress last week. He was in South America, Africa, and met with most major heads of state in Western Europe. OpenAI is pushing for a regulatory agency seeded with their own ideas, which carries with it the critique that such an agency will end up creating regulatory capture where only larger companies like OpenAI, Microsoft, and Google will have the resources for compliance. The current policy vacuum, speed of technological development, and lack of deep tech expertise amongst many policy makers means this is a real risk.
Meta and others are taking a different strategy, pushing “open source” models that can — in theory — be run and adapted by individuals and smaller companies, and would be harder to regulate. However, these companies haven’t figured out how to benefit from open source, so it remains to be seen how long they’ll support this approach.
News
OpenAI on the political offensive
In reaction to the pending moves on AI regulation, Sam Altman was in Europe, visiting with Germany’s chancellor, Spain’s Prime Minster, France’s President, the Polish Prime Minister, and the UK Prime Minister. Whew, and that’s not even counting his prior week visiting with non-heads of state in Toronto, Rio de Janiero, Lagos, and Lisbon.
OpenAI says it could ‘cease operating’ in the EU if it can’t comply with future regulation (The Verge, 5/25/23)
“The details really matter,” said Altman, according to a report from The Financial Times. “We will try to comply, but if we can’t comply we will cease operating.” A day later, Altman tried to temper his initial comments, saying OpenAI had productive conversations related to AI regulation in Europe and “of course, have no plans to leave.”
Sam Altman visits with French President Macron, Macron releases photo (Twitter, 5/23/23)
Developing talents and technologies in France, acting for regulation at the French, European and global levels, these are our priorities in terms of artificial intelligence. We discussed this with Sam Altman, the creator of ChatGPT.
Concurrently, OpenAI released initial stabs at regulatory frameworks (that would apply to them!)
Governance of superintelligence (OpenAI, 5/22/23)
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
Grants to develop “Democratic Inputs to AI (OpenAI, 5/25/23)
“…a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.”
Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order (CNN, 5/25/23)
In a speech in Washington attended by multiple members of Congress and civil society groups, Microsoft President Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision for the technology that could rival competing efforts from countries such as China.
The remarks highlight how one of the largest companies in the AI industry hopes to influence the fast-moving push by governments, particularly in Europe and the United States, to rein in AI before it causes major disruptions to society and the economy.
Disinformation IRL
AI-generated photo of fake Pentagon explosion sparks brief stock selloff (New York Post, 5/22/23)
A computer-generated image purporting to show a fire near the Pentagon was shared by Russian state media outlet RT, then shared by Zerohedge, causing a small dip in the S&P 500 and DJIA before recovering once the photo was discredited.
This image could have easily been created in photoshop prior to generative AI, but it caused a major disruption to the financial system and illustrated the chaos tools that can churn these out by the billions can cause.
AI presents political peril for 2024 with threat to mislead voters (AP, 5/14/23)
“Here are a few: Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave. Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.”
60 Minutes demonstrates AI-enabled social engineering attack (60 Minutes, 5/21/23)
As part of a segment on elder abuse and fraud, 60 Minutes hired Proof Point Security to successfully trick an employee into giving out a passport number, live on camera. Worth noting is that the quality of the AI voice was not particularly good, but it was close enough when paired with a spoofed caller ID.
Bias
ChatGPT shows bias in describing Howard students vs Harvard (Textio, 5/18/23)
Textio, a company that aims to help reduce written bias in the workplace, aims to show that subtle bias in LLM output can be just as important as more purposeful misinformation.
But taken as a set, the bias is clear. Howard alumni are expected to be creative and have a passion for diversity and inclusion, but not necessarily to have analytical skills. Harvard alumni are expected to have analytical skills and strong attention to detail, but to struggle with working inclusively.
Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s. (NY Times, 5/22/23)
These systems are never foolproof, said Dr. Mitchell, who is no longer working at Google. Because billions of people use Google’s services, even rare glitches that happen to only one person out of a billion users will surface. “It only takes one mistake to have massive social ramifications,” she said, referring to it as “the poisoned needle in a haystack.”
US Government
The White House wants your opinion on how to regulate AI (Yahoo News, 5/23/23)
The Biden administration is looking to create a new national strategy on artificial intelligence, but first it’s asking the public to weigh in on some of the thorniest issues related to the fast-evolving field. In “order to seize the opportunities AI presents, we must first manage its risks,” the White House says in a fact sheet announcing the effort. Through July 7, the administration is looking for input on a long list of questions, including what safeguards are necessary to protect individual rights as AI advances, what its impact on Americans’ jobs might be, and how the tech can be used to improve government services.
Congress Really Wants to Regulate A.I., But No One Seems to Know How (The New Yorker, 5/20/23)
In reaction to a proposed regulatory agency:
As Clem Delangue, the C.E.O. of the A.I. startup Hugging Face, tweeted, “Requiring a license to train models would . . . further concentrate power in the hands of a few.” In the case of OpenAI, which has been able to develop its large language models without government oversight or other regulatory encumbrances, it would put the company well ahead of its competitors, and solidify its first-past-the-post position, while constraining newer entrants to the field.
A tale of two AI futures (op-ed, Aram A. Gavoor, The Hill, 5/21/23)
The best course is for Congress to keep its options open — to resist the impulse to delegate permanent authority to executive branch experts who simply do not exist right now. Instead, it should focus on maintaining structural constraints with a biannual reauthorization requirement for the new agency that regulates AI, requiring robust reporting and congressional oversight. Congress must employ its political will to set clear guardrails for how such an agency will oversee, enforce, and report on the executive branch’s use of AI, in addition to the use of AI by the private sector.