Incentives supersede safety at OpenAI and in the EU
This week: the regulatory implications of the chaos at OpenAI; the EU AI Act in jeopardy; and text-to-video just around the corner.
Did a friend or colleague forward this to you? Welcome! Sign up above for a free weekly digest of all you need to know at the intersection of AI, policy, and politics.
What OpenAI’s boardroom drama means for AI policy
There’s been no shortage of ink spilled about the governance drama at OpenAI over the past week and a half, but in case you were prepping Thanksgiving stuffing instead of obsessively refreshing Twitter/X like the rest of Silicon Valley, here’s a brief recap. Over the course of five days, OpenAI, which is technically a nonprofit organization that also controls a for-profit subsidiary, fired its tech-darling CEO Sam Altman for vague reasons; Altman then announced he was going to Microsoft along with other key OpenAI engineers and researchers; several days later he was reinstated as CEO but removed from the board; and two board members resigned and were replaced with former Salesforce co-CEO Bret Taylor and former Harvard president and Treasury Secretary Larry Summers (!).
Amid the drama, investors in the for-profit subsidiary were up in arms over the nonprofit board’s decision; employees threatened to quit en masse unless Altman was reinstated; and the tech community was sent into a tizzy over the possibility that the ouster was due to a technological breakthrough the board worried was insufficiently safe to proceed with.
There are still big unanswered questions, but as we consider the policy implications of OpenAI’s turmoil, three things are clear:
Even an ostensibly safety-focused nonprofit cannot effectively self-regulate.
Altman himself began his testimony to Congress in May by heralding the nonprofit’s charter, stating that “The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.” Trust us, he intimated to regulators – but last week’s chaos dispelled any illusions that such a transformative and potentially dangerous technology can or should be regulated solely by AI companies themselves, even if they’re nonprofits or have safety-focused missions.
Whether a personal, ideological, or governance conflict was at the root of Altman’s firing, the OpenAI board was not able to effectively manage the CEO’s behavior. The nonprofit’s charter didn’t hold up against the conflicting for-profit interests: the employees who stood to earn tens of millions of dollars through an upcoming stock sale, the VCs who had invested millions of dollars foreseeing a future windfall, and Microsoft, which has invested $10 billion and was left on the sidelines while consequential decisions were made. The board’s structure and composition can be modified at any time, so the balance between safety and commercialization can easily tip (and probably already has) in favor of commercialization. The entire company was nearly absorbed wholesale into Microsoft, which is, obviously, a for-profit company with no stated safety mission.
All of these events call into question the value of OpenAI’s mission statement and nonprofit structure when it comes to the ability to guide the company’s actual behavior.
Lawmakers have been largely quiet on the issue, but in recent days a few of them have shared a similar sentiment. At the Axios AI+ Summit forum, Sen. Amy Klobuchar said: “I think it shows how fragile all of this is.… There's also a need to have some guardrails, because you never know at what moment who's going to be in charge of what, who's going to be making decisions about this incredibly powerful technology.”
OpenAI and Sam Altman’s brands are tarnished.
Sam Altman has unquestionably been the face of the industry as Congress and the administration attempt to wrap their arms around AI. Google, Microsoft, Anthropic, and other major tech companies have powerful lobbying networks, but Altman was showing up in D.C. and shaking hands, headlining private dinners, and influencing the shape of AI regulation. Now that investigative reporters are digging into his behavior, the luster has faded, and while Altman may walk away from this episode with his reputation in Silicon Valley mostly intact, it’s hard to imagine that lawmakers won’t think twice about his trustworthiness.
There are also commercial implications. ChatGPT’s success and OpenAI’s rapid pace of deployment had made it the default language model for many developers and consumers. Now, any customer running LLMs in production has seen firsthand the risk that OpenAI could abruptly shut down, which will prod them to diversify their dependencies, whether to another provider like Anthropic or to an open-source model. The possible loss of market dominance may, in turn, diminish OpenAI’s political influence.
Regulators need to stay focused on existing and near-term harms, but can’t neglect the possibility that transformative technology arrives much sooner than they may have bargained for.
While executives and board members have denied the rumor that a safety disagreement was the proximate cause of Altman’s firing, it seems likely that there was a recent research breakthrough. We still don’t know the details, but one line of speculation is that OpenAI researchers found a way to augment large language models’ understanding of words with new techniques that enable more complex reasoning, like thinking a few steps ahead in a game of chess to figure out the best way to win. The implication is that this approach, over time, would allow AI systems to pursue goals on their own without being explicitly directed by humans each step of the way.
Self-directed AI agents with access to tools like an internet browser represent a scenario that experts worry about, and one that policymakers are totally unprepared for. While regulators should focus on known harms, these agents are probably closer than lawmakers are bargaining for, and lawmakers would do well to start thinking about the implications and about potential checks on reckless deployment. For example, policymakers should establish that if a developer trains a model and lets it loose in the world to pursue some goal, that developer is liable for any wayward harms caused by the AI agent; with that liability in mind, developers would be less inclined to treat safety as an afterthought.
The takeaway from the whole saga is that self-regulation, even for nonprofits, is unworkable. It’s probably a good thing that this conflict happened now, before the technology that OpenAI and other leading AI companies are building becomes more powerful and more dangerous. Conflicts between commercialization incentives and safety are too significant to ignore. Open-source models aren’t the answer, as they come with their own set of problems. It’s essential that these companies have safety and alignment teams, but those teams are insufficient on their own.
Government regulation is the only game in town when it comes to ensuring comprehensive safety for AI.
Landmark AI legislation is stalled in Europe
While the US press has been singularly focused on the drama at OpenAI, over in Europe progress on the EU AI Act has ground to a halt amid pushback from tech firms and some governments against the proposed regulatory constraints on the largest foundation models. France, Germany, and Italy (the EU’s three largest economies, and collectively home to two of Europe’s most promising AI startups, Mistral.ai in France and Aleph Alpha in Germany) are leading the charge to weaken the requirements in favor of self-regulation, hoping to boost their competitive advantage in the global AI race. This comes after months of intense lobbying from large tech companies like Meta, Alphabet (Google), Microsoft, and OpenAI against stringent regulation of the largest models.
Further delay could put the entire Act at risk, as EU Council leadership rotates in January and a new European Parliament will be elected in June of next year. The EU was once seen as the global leader on AI regulation, but that reputation is in jeopardy now that pressure from lobbyists and competing economic and military incentives are becoming more obvious and compelling.
Text-to-video is coming
Social media was abuzz this week with demos of new generative AI video tools. Startup Pika made a splash with a slick announcement (linked above) of its text-to-video product, and Stability AI, the company behind Stable Diffusion, released new tools that let users take a photo and photorealistically “extend” it into a few seconds of video, adding motion to real images.
We’ve written extensively about synthetic audio, which is already good enough to dupe listeners and much easier to create with existing tools. Creating convincing synthetic video has been much more difficult because of the huge compute requirements and the challenge of weaving individual frames together into realistic visual action.
Among generative AI tools, synthetic audio continues to pose one of the biggest threats in terms of its potential for fraud and deception, but these releases illustrate how quickly video is catching up.
Of Note
Technology
How Jensen Huang’s Nvidia Is Powering the A.I. Revolution (The New Yorker)
Meta disbanded its Responsible AI team (The Verge)
Sports Illustrated Published Articles by Fake, AI-Generated Writers (Futurism.com)
Extracting Training Data from ChatGPT (Google DeepMind, University of Washington, Cornell, CMU, UC Berkeley, ETH Zurich)
Why Won’t OpenAI Say What the Q* Algorithm Is? (The Atlantic)
How Your Child’s Online Mistake Can Ruin Your Digital Life (The New York Times)
Government
State privacy watchdogs float new rules for consumer protection in AI (San Francisco Chronicle)
Next Senate AI forum will focus on IP and copyright (Axios Pro)
With Altman back at OpenAI, chaos leaves D.C. asking: Now what? (Politico)
Inside U.S. Efforts to Untangle an A.I. Giant’s Ties to China (The New York Times)
Senator: D.C.'s AI forums stay non-partisan, for now (Politico)