This week: SB 1047 passes the California Legislature, and the lessons from its journey; plus, all the other California AI bills that Twitter hasn’t noticed yet.
All eyes on Newsom
As anticipated, California State Senator Scott Wiener’s controversial AI risk bill SB 1047 passed the Legislature and is headed to Governor Newsom’s desk for signature or veto by the end of September. SB 1047 is a trailblazer in AI regulation, and there are a few lessons to draw from its legislative journey.
The structural politics of the California Legislature support robust AI regulation. The issue polls well with voters, Big Tech’s reputation is poor, and tech jobs are highly concentrated in just a few California districts. Legislators from all the other districts have an incentive to be seen protecting jobs at home, not the richest corporations in the world. Sen. Wiener embraced an open legislative process for this bill, introducing early drafts and gradually updating them, evolving SB 1047 from an “intent bill” in September 2023 into a substantive, thoughtful attempt at regulating a complex issue with potentially extreme risks.
In mid-2023, after the shock of ChatGPT’s initial release, a clamor for government regulation emanated from many corners – academics, tech luminaries, and NGO leaders. Sam Altman was on the Hill in DC asking to be regulated, and AI safety organizations associated with the effective altruist movement cropped up. In that environment, an “open draft” approach made sense – everyone seemed to agree that something needed to be done; they just needed to work through the details.
The politics changed quickly. Libertarian-leaning, pro-technology “effective accelerationists” sprang up on Twitter/X and quickly made SB 1047 a target of online vitriol, seizing on technical drafting issues and policy provisions that hadn’t been fully thought out. Not expecting this avalanche of opposition, Sen. Wiener found himself without the motivated political coalition needed to parry critics, who were quickly organizing and using both grassroots and inside lobbying tactics to sink the bill. This combination of grassroots tech activism and corporate opposition proved formidable.
These forces created some strange political bedfellows. Speaker Emerita Nancy Pelosi, rumored to have designs for her daughter to take over her Congressional seat – a seat Sen. Wiener also covets – vocally opposes the bill. Democrats’ longtime antagonist Elon Musk supports the bill, apparently genuinely worried about AI risk. Longtime Wiener ally San Francisco Mayor London Breed has also come out in opposition, nudged by tech and real estate interests who fear the bill will precipitate an AI industry exodus from her city.
We predict that Gov. Newsom will veto the bill. However, this is just the first inning for AI regulation in California. There are dozens of AI-related bills currently pending in the Legislature, and given the structural dynamics outlined above, we expect many of them to make it to the Governor’s desk. The challenge for the pro-regulation camp will be winning the support of Newsom (and future governors), who will be more attuned to statewide concerns like the potential economic impact of large companies leaving the state. But the Governor can’t veto every AI risk bill forever without risking being labeled a Big Tech stooge on a future Democratic primary debate stage, and a constant stream of bills will pressure Newsom to sign at least one or two.
On the other side, “effective accelerationists” and corporate AI policy honchos will soon need to start talking about the benefits of AI for the human constituents of legislators representing places like Cerritos, Stockton, and San Luis Obispo – or at least start to push some campaign cash their way.
And if Newsom signs the bill into law? We feel confident in saying it will have little negative impact on AI innovation, and may just reduce the likelihood of a catastrophic scenario.
On balance, Sen. Wiener deserves credit for his well-intentioned and dogged approach to tackling the issue. He may lose the battle but has done more to catalyze a meaningful discussion of how AI should (and should not) be regulated than any other public figure to date, and we hope he continues to push the conversation forward.
The California legislation that Twitter forgot
With only days left in the legislative session and all eyes on the final fate of SB 1047, one could be excused for thinking it’s the only AI legislation under consideration. In fact, the California Legislature is weighing more than thirty(!) other AI-related bills this session, on topics ranging from autonomous vehicles to deepfakes.
We’re keeping tabs on a few of the more interesting and potentially impactful bills, including:
AB 3211: California Digital Content Provenance Standards. This bill would require developers of generative AI tools to apply provenance information to synthetic content. It would also require that new recording devices such as cameras or phones sold or distributed in California give users an option of applying provenance data to authentic images captured with that device.
Unlike SB 1047, AB 3211 has support from OpenAI, Microsoft, and Adobe, who are all members of the Coalition for Content Provenance and Authenticity (C2PA), an industry group building a cryptographically verifiable standard for content authenticity. It’s opposed by other industry groups like TechNet and NetChoice.
AB 3211 passed the Assembly in a unanimous vote and, after clearing committee hurdles, is now up for a full vote in the State Senate.
AB 2602: Contracts against public policy: personal or professional services: digital replicas. This bill requires performers’ contracts to explicitly authorize the use of AI replicas and requires these contracts to be reviewed on the performer’s behalf by an attorney or a labor union. It restricts the use of broad rights clauses that could be leveraged to provide AI replica rights alongside other permissions. Unsurprisingly, the actors’ union SAG-AFTRA is one of the bill’s biggest champions.
This bill overwhelmingly passed the State Senate on Tuesday and now awaits approval (or a veto) from Gov. Newsom.
AB 2655: Defending Democracy from Deepfake Deception Act of 2024. This bill would require “large online platforms” to block the posting of materially deceptive content related to elections in California, during specified periods before and after an election.
Many states, including California, have banned or restricted the use of synthetic content or deepfakes in election communications, but this bill would put the onus on platforms to take action rather than on the person or campaign posting the content. While the legislation stops short of mandating that platforms actively identify deepfake content, it requires them to provide tools that allow California residents to report content that violates the law. If signed into law, AB 2655 will almost certainly attract court challenges, much like the social media “deplatforming” laws passed in other states.
AB 2655 also passed the State Senate on Tuesday and now awaits approval (or a veto) from Gov. Newsom.
Even if SB 1047 is vetoed, its proponents may take some comfort in the fact that all the incoming fire it drew provided air cover for other AI legislation to move forward with less scrutiny.
Of Note
Campaigns, Elections and Governance
Social platform X edits AI chatbot after election officials warn that it spreads misinformation (AP)
How Utah and Texas became the face of political deepfakes ahead of the 2024 election (Fast Company)
Silicon Valley Is Coming Out in Force Against an AI-Safety Bill. California State Senator Scott Wiener responds to his many critics. (The Atlantic)
The Mayor of London Enters the Bullshit Cinematic Universe (Wired)
Technology and Business
AI Regulation Is Coming. Fortune 500 Companies Are Bracing for Impact. (The Wall Street Journal)
China’s AI Engineers Are Secretly Accessing Banned Nvidia Chips (The Wall Street Journal)
Chip challengers try to break Nvidia’s grip on AI market (Financial Times)
This Code Breaker Is Using AI to Decode the Heart’s Secret Rhythms (Wired)
Nvidia earnings now rival US jobs report for impact on markets (Financial Times)
How Will Self-Driving Cars Learn to Make Life-and-Death Choices? (The Wall Street Journal)
OpenAI Races to Launch ‘Strawberry’ Reasoning AI to Boost Chatbot Business (The Information)
Arizona Deal Latest Sign of Booming Demand for Sites to Power AI (The Wall Street Journal)
OpenAI in talks to raise fresh funds at a valuation of more than $100bn (Financial Times)
OpenAI says ChatGPT's weekly users have grown to 200 million (Reuters)
Misc
AI Doomers Had Their Big Moment - Did They Waste it? (The Atlantic)
‘Anyone Can Be a Victim’: Sprawling AI Fake Nudes Crisis Hits South Korea (The Wall Street Journal)
1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds (404 Media)
Welcome to Fakesville: Inside an AI Nightmare that Tore Apart a School (The Information)
Inside Universities’ Love-Hate Relationship With ChatGPT (The Wall Street Journal)
The World’s Call Center Capital Is Gripped by AI Fever — and Fear (Bloomberg)