Taylor Swift vs. deepfakes
This week: why Taylor Swift and many others have no immediate remedy against non-consensual deepfakes, and what Congress might do about it.
Programming note: we’ll be taking a break for a few weeks. See you soon!
Will Taylor Swift get Congress to finally act on AI?
In recent days, Twitter/X has been flooded with pornographic deepfake images of Taylor Swift. Legions of Swifties are up in arms, as is the White House. While Taylor Swift may be the highest-profile victim of non-consensual synthetic pornography, New Jersey high schoolers and many other non-celebrities have also been victimized as AI image generation tools have grown more powerful and more widely available.
Currently, the remedies available to Swift and other victims range from unsatisfying to non-existent, but recent activity in Congress may soon change that.
While there are a handful of state laws on the subject, there is currently no federal law that deals with non-consensual deepfake porn. Congressman Joe Morelle (D-NY) introduced the Preventing Deepfakes of Intimate Images Act in the last Congress, but the House Judiciary Committee declined to take it up. Rep. Morelle reintroduced the bill a few weeks ago with 26 cosponsors, including Republican Rep. Tom Kean, Jr. of New Jersey, and with support from the New Jersey high school victims and their families.
Over in the Senate, a bipartisan group of lawmakers on the Judiciary Committee, including Sens. Dick Durbin and Lindsey Graham, this week introduced the DEFIANCE Act to “hold accountable those responsible for the proliferation of nonconsensual, sexually-explicit ‘deepfake’ images and videos,” mentioning Taylor Swift specifically in the bill’s press release.
While the text of the DEFIANCE Act was not available as of this writing, it appears to create a civil remedy for victims of deepfake porn – that is, victims would be able to sue the individuals responsible for creating a deepfake using their likeness, as well as anyone who continues to distribute the content while aware of its origins. The House bill also includes a civil liability provision but goes a step further, adding a criminal penalty of imprisonment for those who create and spread “an intimate digital depiction.”
This fairly surgical targeting of individual creators and distributors, paired with bipartisan support in both the House and Senate, makes deepfake porn liability a good candidate for actual passage in a Congress desperate to show competence and relevance on technology. Tellingly, the House bill includes a specific liability shield for tech platforms that act “in good faith to restrict access to or availability of intimate digital depictions.” No doubt this provision is designed to blunt any opposition from the powerful tech lobby, which should be eager to endorse any legislation that steers accountability away from tech companies’ content moderation failures and toward individual bad actors.
Would civil liability for individual deepfake creators do anything to stem the tide of non-consensual deepfake porn? According to Arsen Kourinian, a partner at Mayer Brown specializing in AI and privacy, the threat of civil liability probably wouldn’t deter the kinds of perpetrators likely generating much of this content – individuals, many of them without significant assets to speak of.
“I think the lawsuit route is going to be pretty ineffective,” Kourinian explained. “And the reason why it might be ineffective is because, if there's just some random teenager posting something online, they don’t have any income.” Victims who sue deepfake porn creators would face the prospect of years in court and tens of thousands of dollars in legal fees, likely paid out of pocket, since most individual defendants aren’t wealthy enough to attract trial lawyers willing to work on contingency.
A juicier target, then, would be the creators and distributors of the AI models that actually generate the synthetic pornography. In the Taylor Swift case, it appears the perpetrators exploited a “loophole” in Microsoft’s AI image creation tool, Designer (which is itself powered by OpenAI’s DALL-E models). Carrie Goldberg, one of the most successful litigators against tech platforms, suggests that the AI vendor (Microsoft in this case) could be liable under a “seller negligence” claim – i.e., Microsoft was negligent in building and releasing a product with such loopholes in the first place. Such a case would instantly be in the running for trial of the decade: America’s #1 celebrity vs. America’s most valuable company, with many novel legal questions about AI for courts to consider.
We’ll see whether Swift v. Microsoft materializes. In the meantime, any attempt in Congress to clarify the civil liability of AI model developers and deployers for misuse of their tools will run into a wall of tech lobby opposition, just like previous attempts to repeal the Section 230 liability shield for social media platforms. The path of least resistance is this crop of individual liability bills – and we wouldn’t be surprised to see President Biden sign one before the year is through.
Of Note
Campaigns, Elections and Governance
OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects (Wired)
Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order (White House)
Youngkin willing to change Virginia’s artificial intelligence policy (WAVY)
Washington governor signs AI order plotting yearlong policy path (StateScoop)
Technology
We Asked A.I. to Create the Joker. It Generated a Copyrighted Image. (The New York Times)
Anthropic’s Antitrust Advantage (The Information)
Law Enforcement Braces for Flood of Child Sex Abuse Images Generated by A.I. (The New York Times)
AI-Generated Taylor Swift Porn Went Viral on Twitter. Here's How It Got There (404 Media)