OpenAI’s hidden alliance with DC nonprofits
This week, an OpenAI lobbyist rallied support from public interest groups to oppose copyright reform; the FCC takes note of deepfake audio and text messages; and Sen. Klobuchar says there’s a “good chance” some election-related AI bills will pass before the new year.
Did a friend or colleague forward this to you? Welcome! Sign up above for a free weekly digest of all you need to know at the intersection of AI, policy, and politics.
Who’s standing up for the artists?
This week, Politico published a bombshell story about how an OpenAI lobbyist has corralled a coalition of academics, think tanks, and public interest groups to urge Congress to avoid passing any additional copyright regulations for AI companies. Notable public interest signatories of the letter include two of the best-known technology advocacy organizations: the Electronic Frontier Foundation (EFF) and Public Knowledge.
While staunch advocates for net neutrality and longtime opponents of telecom monopolies, EFF and Public Knowledge have lately sided with big tech companies on internet policy issues: first opposing Section 230 reform, and now supporting AI models' ability to ingest unlimited amounts of copyrighted information. With these public interest groups co-opted or otherwise persuaded to adopt a corporate policy agenda, there is now scant organized public interest opposition that can whisper in the ears of lawmakers and effectively lobby against companies whose core business model relies on ingesting and distributing massive amounts of data.
Unfortunately, that doesn’t bode well for consumers and creators in the AI era.
The FCC is paying attention to deepfake audio
Last week, we wrote about how the Federal Communications Commission (FCC) should get ahead of the threat of deepfake calls in the 2024 election. This week, FCC Chairwoman Jessica Rosenworcel announced that the agency will do just that. She proposed a Notice of Inquiry to investigate whether and when existing and future AI technologies fall under the Telephone Consumer Protection Act (TCPA), and to look into how the Commission might use AI technologies to combat spam.
We’re puzzled by one element of the Notice of Inquiry, though: if taken up, the proposal will also “consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources”. While this could reference scenarios akin to NYC Mayor Adams’ sanctioned deepfakes that we highlighted last week, the language isn’t clear, and the idea that the FCC would weigh in on what counts as “legitimate” AI generation is concerning.
Content-based restrictions will wade into a morass of First Amendment objections and practically guarantee lawsuits against the FCC. It would be safer and more effective for the Commission to simply claim that all deepfake audio constitutes an “artificial voice” under the TCPA without examining the underlying purpose.
Should the Commission decide to proceed with the Notice of Inquiry, Chairwoman Rosenworcel will determine its final scope and timeline. It remains to be seen whether or not we’ll have clarity before the 2024 elections.
Sen. Klobuchar is hopeful federal election AI rules pass by end of year
One straightforward way to mitigate the impact of AI-generated imagery in elections is to require disclosures in advertisements. The idea is bipartisan and broadly popular. Nonetheless, the federal government has been slow to move ahead with such measures thanks to indecision at the FEC and paralysis in the House. States including Texas and Minnesota are moving forward in a patchwork fashion to mandate disclosure of AI-generated campaign advertising or to prohibit it entirely, and Michigan lawmakers look set to join them soon.
On the federal side, Sen. Amy Klobuchar indicated in a Bloomberg interview last week that she believed some of the election provisions she’s proposed recently would be passed as part of year-end 2023 bills. This could include all or parts of three different bills she has introduced this year, most with strong bipartisan support:
The REAL Political Ads Act, which would require a disclaimer on ads that use AI-generated images or video.
The Protect Elections from Deceptive AI Act, which would prohibit the distribution of materially deceptive AI-generated media relating to federal candidates in certain issue ads that aim to influence a federal election, while carving out protections for parody, satire, and the use of AI-generated content in news broadcasts.
The Honest Ads Act, which focuses less directly on AI but would close a loophole in online advertising disclaimer requirements.
Many obstacles remain, but it’s a signal of progress toward what would be Congress’s first substantial AI regulatory measure, one desperately needed ahead of 2024.
Of Note
Long-term safety
White House to unveil sweeping AI executive order next week (The Washington Post) The forthcoming order is expected to be the administration’s most meaningful attempt to date to regulate AI.
Managing AI Risks in an Era of Rapid Progress (Yoshua Bengio, Geoffrey Hinton et al.) Two godfathers of AI raise another alarm about the rapid pace of advancement and call for “urgent governance measures”.
Prime Minister Rishi Sunak makes a speech on the risks and opportunities of AI (Gov.uk) UK PM Rishi Sunak announces the “world’s first AI safety institute”.
Mustafa Suleyman and Eric Schmidt: We need an AI equivalent of the IPCC (Financial Times)
Copyright
Music publishers are suing Anthropic for training its AI model on their song lyrics (Fast Company) Three of the largest publishers, Universal Music Group, Concord Music Group, and ABKCO, are suing Anthropic, accusing the AI company of training its model Claude on song lyrics from their catalogs. The suit goes one step further than the lawsuits against OpenAI, claiming that Anthropic has also profited from incorporating these lyrics into outputs from its commercially available APIs.
This new data poisoning tool lets artists fight back against generative AI (MIT Technology Review)
Autonomous vehicles
The California DMV has suspended Cruise robotaxi’s permit to operate in San Francisco, citing safety concerns following a number of incidents. Waymo is still operating, for now.
Inside the Cruise crash that got the robotaxis pulled from S.F. Was there a coverup? (San Francisco Chronicle)
Calif. DMV says Cruise hid video of SF crash, suspends driverless car firm's permits (SF Gate)
Cruise’s Driverless Taxi Service in San Francisco Is Suspended (The New York Times)