This week, deepfakes were deployed to influence a parliamentary election in Slovakia; researchers find ways to tamper with all kinds of visual watermarks, complicating tech companies’ safety efforts; and chaos in Congress means progress on AI legislation this year is likely up to the states.
Do you have a friend or colleague who might be interested? Forward this over and tell them to subscribe!
Deepfakes start ringing in voters’ ears
A secret recording of a conversation between a political candidate and a journalist suggests that the candidate’s party has been rigging the election in its favor, securing votes ahead of time and manipulating ballots at specific polling stations.
Is this a MAGA accusation coming in October 2024?
No. It was an actual deepfake recording that circulated ahead of Slovakia’s parliamentary election last week, part of a wave of propaganda that may have helped a pro-Russian party win control of parliament. The deepfake used synthetic audio of an opposition candidate and a journalist, and it circulated on Meta’s platforms and Telegram as late as the day before the election.
Deepfakes present a combination of technological, business, and regulatory challenges that is no longer hypothetical. In recent months, deepfakes have exploded as simple consumer-facing tools like HeyGen make it easy to create convincing synthetic video and audio from short samples of a person’s likeness and voice. The Slovakian case appears to be the first meaningful attempt to use synthetic audio to directly influence an election, and while its impact on the result is unknown, it portends a deluge of deepfakes across the internet and into elections in 2024.
We’ve written about the complexity of tackling image and video deepfakes, but audio deepfakes present unique and difficult challenges:
Audio deepfakes are easier and cheaper to create, and harder to control at generation time: Unlike convincing video deepfakes, generated audio doesn’t require massive computing power, so it is easier to create with free, low-cost, or open source generative AI models. For video, it’s conceivable that in the short term a flood of malicious deepfakes could be choked off by foundation model companies via watermarks or by rejecting suspicious users. The same can’t be said for audio.
Audio deepfakes are more difficult to detect: Audio inherently carries far less information than video, so detection tools have a harder time distinguishing real from generated content. For example, some deepfake image and video detection algorithms work by analyzing artifacts like camera sensor noise signatures, which have no counterpart in audio-only content (see the first sketch at the end of this section).
Content provenance standards for audio are in their infancy: The C2PA, the coalition of tech companies creating technical standards to cryptographically log and display provenance information, does have some initial audio specifications, but they’re an afterthought. No C2PA member companies appear to work on audio processing toolchains, and with audio-only posts rare on social networks, distribution platforms have little incentive to build safeguards (see the second sketch at the end of this section).
Audio deepfakes can be distributed on more vulnerable channels: If social media companies ramp up detection and enforcement, audio deepfakes have another ripe target for massive distribution: voice calls, potentially even interactive ones.
Because voice calls are classified as a Title II telecommunications service (unlike Title I services such as text messages and broadband), carriers can’t and don’t listen in. Specifically, the classification means they can’t filter or block calls based on suspicious content (or, hypothetically, AI audio watermarks). While the TRACED Act of 2019 gave the FCC and carriers new abilities to trace the origin of bogus calls, shut down numbers, and issue fines, these remedies wouldn’t stop an attack in real time, before it causes havoc.
Generative audio is, to an extent, an amplifier of an existing problem. Political robocalls have long been used for nefarious purposes, like the infamous “push poll” attributed to the George W. Bush campaign in the 2000 GOP primary, in which South Carolina voters reportedly were asked over the phone, "Would you be more likely or less likely to vote for John McCain for president if you knew he had fathered an illegitimate black child?"
Voters are even more likely to listen to (and be persuaded by) a call when they hear a well-known voice. What happens when a chaos agent or foreign actor, operating independently of a campaign, uses a synthetic Biden voice to tell supporters to vote on Wednesday when the election is actually Tuesday? Pairing old-fashioned robocalls with AI deepfakes could create a problem with little available recourse.
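Two quick sketches make the detection and provenance challenges above concrete. First, detection: with no visual channel, audio forensics tools are left pooling low-level spectral statistics like the ones below. This is purely our own illustration, not any vendor’s method; the file clip.wav and the downstream real-versus-generated classifier are assumptions, and the example requires the numpy and librosa libraries.

    import numpy as np
    import librosa

    def spectral_features(path, sr=16000):
        """Pool spectral statistics of the kind audio forensics tools rely on."""
        y, sr = librosa.load(path, sr=sr, mono=True)            # waveform only
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # timbre envelope
        flatness = librosa.feature.spectral_flatness(y=y)       # noise-likeness
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)  # energy spread
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                               np.array([flatness.mean(), flatness.std(),
                                         rolloff.mean(), rolloff.std()])])

    # features = spectral_features("clip.wav")  # hypothetical input file
    # A classifier trained on real vs. generated speech (not shown) would
    # consume this vector; nothing in it is as distinctive as a camera's
    # sensor fingerprint.

Second, provenance: the core idea behind standards like the C2PA is to cryptographically bind a claim about a file’s origin to a hash of its bytes, so any tampering breaks the signature. The sketch below is a conceptual stand-in, not the actual C2PA manifest format (which embeds COSE signatures in the asset itself); the key handling is simplified, and the example requires the cryptography package.

    import hashlib
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_provenance(media_path, claim, key):
        """Bind a provenance claim to the media bytes with a digital signature."""
        digest = hashlib.sha256(open(media_path, "rb").read()).hexdigest()
        manifest = {"asset_sha256": digest, **claim}
        payload = json.dumps(manifest, sort_keys=True).encode()
        return {"manifest": manifest, "signature": key.sign(payload).hex()}

    key = Ed25519PrivateKey.generate()  # in practice, a key tied to a certificate
    record = sign_provenance("clip.wav", {"generator": "example-tts"}, key)
    # A platform holding the matching public key could verify the record and
    # label or down-rank audio whose bytes no longer match "asset_sha256".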
Technological solutions for deepfake detection are already hackable
With deepfake usage mushrooming, the White House, lawmakers, and tech companies are banking on watermarking (embedding information about a piece of media’s origin and authenticity) as an essential strategy to fight it. It’s even one of the White House’s seven “voluntary commitments” agreed upon with the major AI companies. However, a team of computer science researchers at the University of Maryland published new research this week showing that today’s watermarking methods can be defeated: both visible and invisible watermarks proved vulnerable to manipulation. In addition to removing watermarks invisible to the naked eye, the researchers successfully added fake watermarks to non-synthetic images, which would confuse social platforms’ detectors.
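For intuition about why watermarks are hard to make robust, here is a toy example of our own, far cruder than either production watermarks or the Maryland team’s attacks: a mark hidden in each pixel’s least significant bit reads back perfectly until mild noise, of the sort introduced by re-encoding or re-photographing, wipes it out. Only numpy is required.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)     # 1-bit watermark

    watermarked = (image & 0xFE) | mark         # hide one bit in each pixel
    print(((watermarked & 1) == mark).mean())   # 1.0: mark reads back perfectly

    # Mild Gaussian noise, as from re-encoding or re-photographing:
    noisy = np.clip(watermarked + rng.normal(0, 2, size=(64, 64)),
                    0, 255).astype(np.uint8)
    print(((noisy & 1) == mark).mean())         # ~0.5: no better than chance

Real schemes spread their signal far more robustly than this toy does, but the Maryland results suggest the same cat-and-mouse dynamic holds at the high end too.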
An arms race is coming. As we’ve written, Google announced an advanced watermarking technology called SynthID in late August; content provenance standards like the C2PA will give digital platforms a way to identify and down-rank synthetic media when appropriate; and more technologies will surely follow. On the other hand, the University of Maryland team’s attacks are quite powerful and lack near-term countermeasures, and simpler tactics are always available, like printing out a generated image and taking a photo of it with a real camera.
Sophisticated bad actors will always find a way to stay one step ahead. However, watermarking and content distribution controls can still increase friction, making it substantially harder to distribute manipulated media on the most trafficked platforms.
The situation is a clear illustration of the need for policymakers to be agile, adapting legislation and regulation quickly in response to rapidly evolving technology, and to partner closely with research institutions and public interest groups, rather than relying solely on tech companies, to advise on and execute risk-mitigation strategies.
States regulate deepfakes while Congress falls into chaos
Watermarking and other controls offer social platforms spotty ways to limit deepfake distribution, but what about regulatory levers?
At the federal level, there are currently no requirements for disclosing or restricting the use of AI-generated content. The Federal Election Commission (FEC) decided it did not have jurisdiction over AI in political ads, then changed its mind and declared that it might, but it is far from making progress on specific regulation. To ensure the FEC has clear jurisdiction to act, Sen. Amy Klobuchar (D-MN) and allies were pushing ahead with the bipartisan Protect Elections from Deceptive AI Act and the REAL Political Ads Act, but with Kevin McCarthy’s ouster and another government shutdown looming next month, prospects for quick passage are in doubt.
This leaves matters to the states, which are moving ahead in a steady but patchwork fashion. Each state regulates its elections with its own set of rules, and deepfakes have become a popular theme for legislative drafters. According to BSA | The Software Alliance, 37 deepfake bills are circulating in state legislatures targeting electoral or sexually exploitative deepfakes, and New York passed one such bill just this week.
Among the state bills, some criminalize sexually exploitative material, others election-related deepfakes, and some both; some carry criminal penalties, others civil; some require disclaimers on AI-generated media, and some ban deceptive AI imagery outright. Given the speed of technological advancement, states whose laws are signed and enforced later, or not at all, may be left more exposed than others.
Of Note
Government
Key takeaways from POLITICO’s 2023 AI & Tech Summit (Politico) Lawmakers say they’re far from comprehensive action on AI; agencies may be the ones to tackle AI risk first; and DC is counting on companies to self-regulate for now.
POLITICO summit: Khan puts AI companies on notice (Politico) Lina Khan argues that the FTC can take action against AI companies if they engage in monopolistic practices.
Pentagon Urges AI Companies to Share More About Their Technology (Bloomberg)
Elections
How AI could sway voters in 2024’s big elections (Chatham House) Examines how AI could disrupt Taiwanese elections in January and American elections in November 2024.
How generative AI is boosting the spread of disinformation and propaganda (MIT Technology Review) Researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”
AIandYou plans AI deepfake ads to prepare voters for 2024 misinformation (Politico) A nonprofit wants to run deepfake videos to prepare voters for…more deepfake videos.
Technology
IBM Tries to Ease Customers’ Qualms About Using Generative A.I. (The New York Times) Matching Microsoft’s moves on indemnity, but not data origins.
Chinese self-driving car testing in California stirs controversy (NBC News) Self-driving car makers may become the newest target of restrictions on Chinese technology.
Meta, OpenAI Square Off Over Open Source AI (The Information) Where each major company stands on open sourcing advanced AI models.
Anthropic in Talks to Raise $2 Billion From Google and Others Just Days After Amazon Investment (The Information)
Deepfakes
Robin Williams’ daughter calls AI recreations of her father “disturbing” (San Francisco Chronicle)
Tom Hanks Warns of Dental Ad Using A.I. Version of Him (The New York Times)
Mr. Beast calls deepfakes “a serious problem” (Mr. Beast)
Copyright
Anti-Piracy Group Takes AI Training Dataset ‘Books3’ Offline (Gizmodo)
Facebook’s new AI stickers can generate Elmo with a knife (Ars Technica) Facebook’s new AI feature makes it easy to generate offensive and/or copyright-protected images of well-known characters.