The FEC deadlocks on AI in political ads
If AI deepfakes take over the 2024 election, can we count on the federal government to save us? The Federal Election Commission last week deadlocked 3-3 on advancing Public Citizen’s petition to mandate the disclosure of AI-generated elements in political advertising. Public advocacy groups hoped this would be a starting point for curbing the spread of misinformation in the 2024 election, but the deadlock means the petition will not advance.
The vote broke along party lines, with Republican commissioners voting against the petition and the Democratic commissioners voting to advance it. Some of the Democratic commissioners expressed skepticism that the FEC has authority to regulate AI-generated content, but supported advancing the petition to a comment phase to gather public input. The FEC has long been criticized for being ineffective, and the gridlock has worsened: the rate at which the agency’s commissioners deadlock on decisions roughly quintupled from the 1975–2007 period, climbing to almost 25% between 2008 and 2019.
There are some efforts underway to broaden the FEC’s authority. In May, Congresswoman Yvette Clarke (D-NY) introduced a bill that would require a disclaimer on AI-generated content and close a disclosure loophole around certain types of digital advertising.
However, with a dysfunctional Congress and the election fast approaching, it’s unlikely the FEC will be an effective cop on the beat. Other federal agencies will almost certainly continue to be hamstrung by the courts. State governments have an opportunity to regulate their own elections, but there’s a gaping hole at the federal level.
And on a related note, if there’s one piece you read in its entirety this week, we recommend this fascinating, detailed account of how and why meaningful regulation of tech companies has failed to materialize, even after years of hand-wringing about the potential harms of social media. We expect big tech to run the same playbook to try to quash legislation around AI.
More regulatory news
The FTC is investigating whether ChatGPT harms consumers (The Washington Post)
In contrast to the FEC’s ineffective response to the misinformation threat of AI, the Federal Trade Commission has opened a wide-ranging investigation into whether OpenAI’s ChatGPT product has violated consumer protection laws. This is the largest regulatory challenge to date for the newest slew of generative AI products, and it will be a test of how well existing consumer protection laws apply to AI.
If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data. The FTC has emerged as the federal government’s top Silicon Valley cop, bringing large fines against Meta, Amazon and Twitter for alleged violations of consumer protection laws. The FTC called on OpenAI to provide detailed descriptions of all complaints it had received of its products making “false, misleading, disparaging or harmful” statements about people. The FTC is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers, according to the document.
America's first law regulating AI bias in hiring took effect last week (Quartz)
Last week, regulators from New York City’s Department of Consumer and Worker Protection began enforcing a first-of-its-kind law aimed squarely at AI bias in the workplace. The law requires more transparency from employers that use AI and algorithmic tools to make hiring and promotion decisions; it also mandates that companies undergo annual audits for potential bias inside the machine. Enforcement began on July 5.
Congress wants to regulate AI. Big Tech is eager to help (LA Times)
Technology interests, especially OpenAI, the nonprofit (with a subsidiary for-profit corporation) that created ChatGPT, have gone on the offensive in Washington, arguing for regulations that will prevent the technology from posing an existential threat to humanity. They’ve engaged in a lobbying spree: According to an analysis by OpenSecrets, which tracks money in politics, 123 companies, universities and trade associations spent a collective $94 million lobbying the federal government on issues including AI in the first quarter of 2023.
Uncensored Chatbots Provoke a Fracas Over Free Speech (New York Times)
Several uncensored and loosely moderated chatbots have sprung to life in recent months under names like GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated the methods first described by A.I. researchers. Only a few groups made their models from the ground up. Most groups work from existing language models, only adding extra instructions to tweak how the technology responds to prompts.
U.S. Looks to Restrict China’s Access to Cloud Computing to Protect Advanced Technology (Wall Street Journal)
The Biden administration is preparing to restrict Chinese companies’ access to U.S. cloud-computing services, according to people familiar with the situation, in a move that could further strain relations between the world’s economic superpowers.
Nick Clegg: Openness on AI is the way forward for tech (Financial Times Op-Ed)
And like every technology, AI will be used for both good and bad ends by both good and bad people. The response to that uncertainty cannot simply rest on the hope that AI models will be kept secret. That horse has already bolted. Many large language models have already been open sourced, like Falcon-40B, MPT-30B and dozens before them.
Campaigns & elections
The 2024 presidential race is the AI election (Axios)
The political world’s consultants and leaders are bracing for earth-shattering change from AI akin to what happened with social media. In private conversations, misinformation is definitely a concern, but it’s a known quantity. We already live in a world where Hunter Biden’s laptop is a critical issue for the GOP and non-existent on the left. Adding more disinformation surely won’t help, but people stopped believing everything “they” said long ago.
The real question is whether AI will make enough of a difference to give one side an edge that wasn’t possible before. At this point, the jury is still out.
Top technologists are portraying a dystopian landscape in 2024 in which misinformation and disinformation proliferate with a speed and ease that means "you can't trust anything that you see or hear," as former Google CEO Eric Schmidt puts it.
Mayor Suarez launches an artificial intelligence chatbot for his presidential campaign (AP)
“Hi, I’m AI Francis Suarez,” the bot says to introduce itself, its mouth moving in a way that’s not quite human. “You’ve probably heard that my namesake, conservative Miami Mayor Francis Suarez, is running for president. I’m here to answer questions you may have about Mayor Suarez’s proven agenda for economic prosperity, cutting spending and supporting our police. So, how can I help?”
Other notable news
Sarah Silverman Sues OpenAI and Meta Over Copyright Infringement (New York Times)
In our last newsletter we wrote about the beginning of what we expect to be a wave of class-action lawsuits around copyright infringement, and this week another big name added hers to two suits accusing OpenAI and Meta of illegally using her work to train their models. Sarah Silverman joins bestselling authors Christopher Golden and Richard Kadrey in the two class-action suits. Since these suits take years to wind their way through the courts, we don’t expect resolution anytime soon, but the wave of lawsuits is likely to impact tech’s regulatory and legal strategies.
Inside the White-Hot Center of A.I. Doomerism (New York Times)
But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.
ChatGPT loses users for first time, shaking faith in AI revolution (The Washington Post)
Headline is a bit much, but nonetheless:
Mobile and desktop traffic to ChatGPT’s website worldwide fell 9.7 percent in June from the previous month, according to internet data firm Similarweb. Downloads of the bot’s iPhone app, which launched in May, have also steadily fallen since peaking in early June, according to data from Sensor Tower.
Humans may be more likely to believe disinformation generated by AI
Disinformation generated by AI may be more convincing than disinformation written by humans, a new study suggests. The research found that people were 3% less likely to spot false tweets generated by AI than those written by humans.
AI and the automation of work (Benedict Evans)
We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?