This week, Scott Wiener’s existential AI risk bill is in trouble; plus, the ongoing fight against explicit, non-consensual deepfakes.
Scott Wiener’s AI bill picks up federal scrutiny as deadlines approach
All politics is local, the famous phrase goes, but we bet that San Francisco’s State Senator Scott Wiener wasn’t expecting his California AI regulation bill to be potentially torpedoed by his home Congresswoman, Nancy Pelosi.
Wiener’s now-controversial bill, SB 1047, is winding its way toward passage in the California legislature, recently clearing the Assembly’s Appropriations Committee. If it passes the Legislature, it’s up for Gov. Newsom’s signature or veto by Sept. 30. While we’d still bet on the bill making it to Newsom’s desk, his signature is in doubt after unusual intervention from a handful of Congressional Democrats – even though the bill has been heavily modified to appease industry complaints.
After industry lobbying in the spring failed to prevent SB 1047 from passing the State Senate, a more powerful opposition effort began a few weeks ago with an op-ed in Fortune from AI luminary (and Andreessen Horowitz-funded entrepreneur) Fei-Fei Li. This was followed by a letter from eight California Democrats representing the Bay Area, Sacramento, and Orange County, and a statement from Nancy Pelosi, who took the unusual step of opposing the bill, citing the immaturity of the industry, the theoretical nature of the risk, and the potential to stifle innovation in California. Their objections used language similar to that of VCs like Andreessen Horowitz; while we have no special insight into the machinations behind the scenes, we’re comfortable assuming that a professional (and expensive) lobbying operation is behind a letter from eight senior House members and the former Speaker opposing a state-level AI law.
Setting aside all-regulation-is-bad libertarians, AI experts are split on whether the technology is mature enough to regulate AI models themselves, as opposed to the outcomes and activities resulting from use of those models. AI’s immense promise and commensurate risk is what motivated Wiener and his team to try to get regulation in place early, before any AI-driven catastrophe arrives.
However, in an attempt to be the first out of the gate to encode very specific technical requirements in state law, the bill’s backers put a target on their backs. Although intended as light-touch regulation, opponents twisted those specifics to characterize the bill as a radical measure. Even after multiple rounds of revisions that accommodated many of the criticisms that had gone viral on social media, the labels stuck and opposition grew.
Ultimately, whatever savvy operation coordinated Congressional opposition to the bill may have succeeded in shifting the political calculus for Governor Newsom. While Newsom has a track record of leading on issues of national importance like LGBTQ+ rights, abortion and gun control, and despite generally favorable public opinion of SB 1047, we’ve yet to see the groundswell of coordinated grassroots mobilization for AI model regulation that would be needed for high-profile leaders to spend political capital and buck corporate lobbyists. Organized labor is the most natural and powerful ally for AI regulation in California, but so far the unions are focused on bringing the fight to AI bills that more immediately impact their members.
More cynically, Governor Newsom has an eye on a future presidential run, and may not want to jeopardize his relationships with powerful national Democrats – and deep-pocketed Silicon Valley donors – over an issue that is difficult to trumpet on the campaign trail (“AI didn’t kill us all – thank me!”).
A more likely scenario now is that the bill is shelved – either by Sen. Wiener himself or by Governor Newsom through a veto. According to Bill Wong, a longtime strategist in Sacramento, “Governors have used vetoes and veto messages in the past to shape and direct changes to bill proposals that simply need more work. He could veto the bill and signal that he would be willing to consider signing a new bill introduced next year that has been further refined to better address the concerns expressed by the eight members of Congress.”
With a September 30th deadline, we’ll have an answer soon.
Politicians across the country take action on explicit deepfakes
Unlike the theoretical harms that Sen. Wiener’s bill seeks to prevent, explicit, non-consensual deepfakes are a clear and present danger – and over the past month there’s been a spate of bills and lawsuits targeting the issue at every level of government.
In San Francisco, City Attorney David Chiu brought a lawsuit against services that create non-consensual deepfakes – for example, by allowing users to upload a photo of a real person and use generative AI to “undress” the photo. His suit aims to shutter the services entirely.
In Tennessee, the Ensuring Likeness Voice and Image Security Act (ELVIS Act) expanded the state’s Protection of Personal Rights law to cover generative AI services that can create unauthorized voice and image likenesses – something that might allow Taylor Swift to sue Donald Trump for his recent unauthorized use of AI-generated images depicting Swift fans as Trump supporters.
And at the federal level, a bipartisan effort has taken shape in the form of the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims to sue over non-consensual deepfake images.
Big Tech has also made a push to tackle the problem through both its product and policy teams. Over the past few months, Google has modified its Search product to reduce the ranking of explicit deepfakes and has banned advertisers from promoting deepfake porn services, while Microsoft has urged lawmakers at both federal and state levels to pass laws that would penalize the creation and distribution of sexually explicit deepfakes.
Despite the multi-pronged approach, it’s far from a sure thing that these efforts will pay off. Many of the companies that Chiu’s lawsuit targets aren’t based in the U.S., and enforcing a court decision abroad might be impossible. And while the DEFIANCE Act has passed the Senate unanimously and enjoys bipartisan support in the House, its prospects are still uncertain; the last major tech bill to overwhelmingly pass the Senate (the Kids Online Safety Act, or KOSA) is stalled in the House over free speech concerns.
Similar questions might ultimately foil Tennessee's ELVIS Act. Free speech advocates have already challenged the legitimacy of the law, citing concerns that the law is overbroad and would eventually fail a First Amendment challenge in court.
Even if these legal, political and jurisdictional hurdles can be overcome – making it harder for deepfake creation tools to find an audience and adding liability risk for their users and distributors – these measures ultimately won’t be sufficient to comprehensively address the problem.
Victims who pursue lawsuits against distributors of deepfake pornography (usually individual citizens) under civil-liability laws like the DEFIANCE Act will face long and expensive court cases, and in many cases individual perpetrators – who may be minors – lack meaningful assets to target. Ultimately, social media platforms have the greatest power to clamp down on the distribution of offensive or illegal deepfakes to large numbers of people. Although some tech companies are taking tentative steps to reduce the spread of these images, as long as Section 230 is still the law of the land they won’t be subject to the liability threats that would motivate meaningful changes.
We expect the policy and politics around non-consensual deepfakes to be messy for years to come – a patchwork of laws with varying degrees of applicability and enforceability. In the meantime, deepfake quality will continue to advance, with an ever-growing number of victims subjected to online humiliation from bogus pictures and videos.
Of Note
Campaigns, Elections and Governance
Schumer Optimistic About Passing Federal AI Regulation This Year (The Wall Street Journal)
Top labor leader says Gavin Newsom is ‘enamored’ with AI at the expense of workers (San Francisco Chronicle)
Editorial: Why California should lead on AI regulation (Los Angeles Times)
Tony Blair’s AI mania sweeps Britain’s new government (Politico)
Trump Promotes A.I. Images to Suggest That Taylor Swift Endorsed Him (The New York Times)
The Year of the A.I. Election That Wasn’t (The New York Times)
Democrats use AI in effort to stay ahead with Latino and Black voters (The Guardian)
Mayoral candidate vows to let VIC, an AI bot, run Wyoming’s capital city (The Washington Post)
Technology
This system can sort real pictures from AI fakes — why aren’t platforms using it? (The Verge)
McAfee Rolls Out Deepfake Detector in Lenovo's New Copilot-Plus PCs (CNET)
Perplexity AI to launch ads on search platform by fourth quarter (Reuters)
Opinion: The world-changing ‘killer app’ for AI could be nuclear fusion (The Washington Post)
No one’s ready for this (The Verge)
How A.I. Can Help Start Small Businesses (The New York Times)
Misc
Google agrees to America’s first newsroom funding deal. It’s already unpopular. (Politico)
Conde Nast Signs Multi-Year Deal with OpenAI (The Information)
Anthropic Hit With Copyright Suit From Authors Over Flagship AI (Bloomberg Law)
Workers at Google DeepMind Push Company to Drop Military Contracts (TIME)