This week, the political fight behind SB-1047; plus, how conspiracy theorists are using AI to undermine elections.
Programming note: Alex is back from maternity leave and AIPP will return to its regular cadence!
The political fight behind the most controversial AI bill yet
For AIPP’s return edition, we’re analyzing the political fight over a recent AI bill in California that has captured the attention – and furor – of Silicon Valley.
In October 2023, California State Senator Scott Wiener introduced the initial draft of a bill that subsequently became the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047). The bill aims to regulate the largest AI companies, introducing liability for extreme (and, at present, theoretical) harms from large AI models. SB-1047 would also require large AI labs to implement reasonable safety processes before releasing large models to the public.
Companies like OpenAI claim they already enforce safety checks, but recent reporting suggests they sometimes bypass their internal safety processes and do not disclose the full range of risks their new models pose. A law enforcing a consistent set of standards that every company must follow would level the playing field, making it harder for an aggressive executive team to discard safety standards in pursuit of growth and profit.
Tech industry lobbyists opposed the bill from the start, arguing it would create regulatory uncertainty, high compliance costs, impractical standards, and over-broad liability that would crush innovation and bury startups in paperwork. There are certainly components of SB-1047 worthy of robust discussion. Should liability for harms fall on foundation model developers or on the end users of those models? Should we create a whole new bureaucracy in response to harms that are still theoretical? How will regulations crafted today play out as it becomes easier to develop powerful models? Is a state-owned computing cluster, funded by the bill, a good use of taxpayer dollars?
The opportunity for a measured policy debate was quickly buried in angry tweets. The Silicon Valley rank-and-file, usually politically averse, was stirred to action when a Twitter/X post went viral, claiming SB-1047 was being “fast tracked” to squash AI innovation and harm startups. The misleading claim sparked a firestorm of opposition from anti-regulation voices, led by key players at VC firm Andreessen Horowitz. The firm, one of the most well-known in Silicon Valley, has become increasingly politically active over the past year. Andreessen claims it’s advocating for “Little Tech” – startups, and the firm’s substantial investments in them – when it opposes regulations touching crypto, antitrust, fintech, and AI. In the case of SB-1047, partners at the firm and their allies attacked the bill in op-eds and on Twitter/X, and kicked off an astroturf campaign to pressure legislators to vote against it.
Many of these outraged pundits seem to have a dubious grasp of the specific legal concepts in the bill, including the liability AI companies already face under existing law and the effective safe harbor the bill provides to AI companies that perform reasonable safety testing. Critics also conflate the severe consequences for willfully lying on compliance forms with the bill’s more moderate civil penalties for safety violations, which would be enforced at the discretion of the California Attorney General, who does not file suit over every violation.
Eventually, the more substantive criticism prompted Sen. Wiener to amend parts of the bill with the potential to impact startups, including setting many of its provisions to trigger only when a model’s training run costs more than $100 million. So far, Silicon Valley remains unmollified.
But we predict that Andreessen and its allies will eventually lose the SB-1047 fight for a simple reason: regulating AI has long been broadly popular with voters, and California state politics are structurally favorable to AI regulation.
Let’s start with public opinion. Recent polling in California, released by the bill’s proponents but performed by a highly reputable research firm, showed overwhelming, bipartisan public support for the kinds of regulations proposed in SB-1047.
Equally important: in California, organized labor often steers the policy agenda. Donations to many legislators are dominated by organizations like teachers’ unions, the SEIU, and the Teamsters – many of which have a vested interest in regulating tech. And unlike VCs, labor actually drives voters to the polls.
While plenty of tech workers are concentrated in a few California districts, AI companies aren’t major employers in most of the state. Legislators won’t risk losing an election by voting for SB-1047, and they will likely burnish their reputations with labor backers and constituents back home by framing a vote for the bill as protecting jobs from AI.
As of this writing, the bill has passed the State Senate overwhelmingly, 32-1, and must pass a vote in the Assembly by August 31st to advance to Governor Newsom’s desk. Given the lopsided Senate vote, it’s reasonable to assume it will.
Andreessen Horowitz and libertarian-leaning techies on social media are soon likely to get a painful lesson in California politics. To get their way on AI policy, they need to do more than make noise on Twitter/X; they need to make a holistic, compelling case for why AI benefits constituents and businesses in districts throughout California, and why AI is a technology to be embraced, not feared.
An “AI supercomputer” is propping up election deniers in Washoe County
A new election watchdog is on the case looking for voter fraud in Reno, and its name is ChatGPT.
Or so says Robert Beadle, a cryptobro, MAGA activist, and perennial election denier whose 10,866-word, ChatGPT-written, conspiracy-laden “analysis” was used to pressure the Washoe County Commission into refusing to certify a recent local election. It’s the latest move in a years-long campaign by right-wing interests to delegitimize elections in one of Nevada’s two most populous, Democratic-leaning counties, the kind of place that might determine a key swing state’s electoral college votes in November.
Beadle, who has previously harassed election officials and pressured them into resigning in favor of right-wing commissioners, was also responsible for forcing the recount in question. Now he’s using AI as a smokescreen, prompting ChatGPT to support, verify, and help rewrite his nonsensical claim that there was voter fraud with “99.9% certainty”.
Throughout the document, Beadle uses equations fed to him by ChatGPT and cites conversations in which the chatbot appears to reassure him of the strength of his analysis. ChatGPT’s responses are obsequious, displaying little command of quantitative analysis and no grounding in basic facts about expected voting patterns. None of this would surprise anyone familiar with the limitations of large language models: LLMs are tuned to produce the words most likely to satisfy their users, not to be correct. Beadle uses this to his advantage, cloaking otherwise unintelligible arguments in sophistry and lending unwarranted credibility to his claims.
While this bogus analysis was just one of many pressures that led to the commission’s vote against certifying the election, it represents a new and insidious way that AI is impacting elections. Now any “activist” can use publicly available AI tools to create mounds of legitimate-seeming analyses that bog down officials, confuse the public, and provide convenient justification for malicious actors to interfere with elections.
Of Note
Campaigns, Elections and Governance
The AI-focused COPIED Act would make removing digital watermarks illegal (The Verge)
Starmer plans to introduce AI bill in King’s Speech (Financial Times)
The AI Act compliance countdown begins (The Verge)
Red Teaming Isn’t Enough (Foreign Policy)
Technology
AI Needs Copper. It Just Helped to Find Millions of Tons of It. (The New York Times)
Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable (404 Media)
Microsoft Quits OpenAI’s Board Amid Antitrust Scrutiny (The Wall Street Journal)
Google’s Nonconsensual Explicit Images Problem Is Getting Worse (Wired)
Venture’s Big Bet on Open-Source LLMs (The Information)
Why The Atlantic signed a deal with OpenAI (The Verge)