This week, Google requires AI-generated media to be disclosed in political ads, the US and UK continue to weigh policy options while computer science professors call for a dedicated agency, and Chinese operatives are using generative AI to boost their influence operations…
Google requires disclosure of AI-generated imagery in political ads
On Wednesday, Politico reported that Google will mandate disclosure of any AI-generated video, audio, or images used in political advertisements run on its platforms. Enforcement will begin in November.
Google’s policy covers “synthetic content” that makes it “appear as if a person is saying or doing something they didn’t say or do,” as well as “a realistic portrayal of an event to depict scenes that did not actually take place” – presumably covering the RNC’s apocalyptic deepfake ad earlier this year. “Inconsequential” edits like cropping, color correction, and red-eye removal are excluded.
Evaluating whether an edit is consequential is a subjective judgment that will be made by human reviewers for the foreseeable future. The technical edits themselves, like cropping or color correction, are currently a black box to these reviewers, since Google will receive a finished asset without an edit history. It would be quite helpful for reviewers to have access to an actual provenance history when evaluating what’s been doctored. If you squint a bit, you can see Google planning for a near future in which reviewers have just such access.
As we covered in a previous newsletter, the C2PA is an industry group building a cryptographically verifiable standard for content authenticity. The C2PA standard includes a verifiable record of content edits, specifically including cropping and color adjustments. Photoshop currently supports C2PA as a beta feature, but since there are no current benefits (like increased distribution on social media) to verifying authentic content in a C2PA-compliant pipeline, adoption is nearly non-existent.
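To make that “verifiable record of edits” concrete, here is a minimal sketch of the kind of edit history a C2PA manifest can carry. The action labels (c2pa.created, c2pa.cropped, c2pa.color_adjustments) come from the C2PA actions vocabulary; the surrounding structure is simplified for illustration and omits the cryptographic signing that makes the record verifiable in practice.

```python
# Simplified, illustrative shape of a C2PA "actions" assertion.
# Real manifests are binary-encoded and cryptographically signed;
# this sketch keeps only the edit-history fields relevant here.
manifest = {
    "claim_generator": "Adobe Photoshop (Content Credentials beta)",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created"},            # original capture
                    {"action": "c2pa.cropped"},            # framing change
                    {"action": "c2pa.color_adjustments"},  # color correction
                ]
            },
        }
    ],
}

def edit_actions(manifest: dict) -> list[str]:
    """List the recorded edit actions in a (simplified) manifest."""
    for assertion in manifest["assertions"]:
        if assertion["label"] == "c2pa.actions":
            return [a["action"] for a in assertion["data"]["actions"]]
    return []

print(edit_actions(manifest))
# ['c2pa.created', 'c2pa.cropped', 'c2pa.color_adjustments']
```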
Political ad compliance is exactly the kind of wedge that C2PA needs. To enforce their own new policies on synthetic media, Google and other platforms could require political ads to come with a verifiable history of edits and generation. Although this data wouldn’t help reviewers evaluate whether or not the edits are “consequential” to an ad’s claims, it would help them determine exactly how much doctoring and deepfakery went into the content to begin with. And by requiring compliance from a set of advertisers that is both very narrow and very consequential – whose ads can change the outcome of elections – Google would accelerate the path to verifiably authentic content on the internet, while limiting near-term friction for its core advertisers and any impact on its advertising business.
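As a sketch of how a reviewer’s tooling might triage ads that arrive with provenance data under such a policy, the hypothetical check below flags any recorded action that falls outside an allowlist of the “inconsequential” edits Google’s policy names. The allowlist contents and the triage logic are assumptions for illustration, not Google’s actual review process, and c2pa.redeye_removal is a made-up label (the C2PA vocabulary has no red-eye action that we’re aware of).

```python
# Hypothetical reviewer-side triage, assuming political ads arrive
# with a C2PA edit history. Actions on the allowlist correspond to
# the "inconsequential" edits named in Google's policy; anything
# else gets routed to a human reviewer.
# "c2pa.redeye_removal" is a made-up label for illustration only.
INCONSEQUENTIAL_EDITS = {
    "c2pa.created",            # original capture, not an edit
    "c2pa.cropped",            # cropping
    "c2pa.color_adjustments",  # color correction
    "c2pa.redeye_removal",     # red-eye removal (hypothetical label)
}

def flag_for_review(actions: list[str]) -> list[str]:
    """Return the recorded actions that fall outside the allowlist."""
    return [a for a in actions if a not in INCONSEQUENTIAL_EDITS]

# An ad whose history includes edits beyond crop/color is flagged:
flagged = flag_for_review(["c2pa.cropped", "c2pa.edited", "c2pa.placed"])
print(flagged)  # ['c2pa.edited', 'c2pa.placed']
```

In practice, a platform would first verify the manifest’s signature chain before trusting any of this history.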
Even without image provenance and edit history, the policy is consequential. YouTube is the most popular video platform across desktop, mobile, and the living room, and Google is one of the largest players in political digital advertising, with $437M in spend in the 2022 cycle and growing fast. An even larger, related category, connected TV (Hulu, Tubi, etc.), accounted for $1.1B in political ad spend last cycle, and will hold an increasingly important position in the political landscape heading into 2024 as traditional TV viewership fades. It’ll be essential that other ad-supported video networks roll out similar policies quickly as more political ad money pours in.
Industry and government stay cozy into the fall
As the summer wraps up, AI regulatory efforts in the US and UK are showing tepid activity. Sen. Chuck Schumer’s second AI Forum is next Wednesday – the one with Musk, Zuckerberg, Altman, etc. – while the UK’s AI Safety Summit is planned for early November. Schumer is promising a ‘supercharged’ effort when Congress returns from recess. There’s still little certainty about concrete policy plans, but a few tea leaves point to incremental, cautious efforts:
Framing around ‘mitigating risks and maximizing benefits’, with concerns about competition from China limiting regulatory scope
A prominent footprint for industry leaders, alongside smaller delegations from labor and academia
Moderate urgency: Schumer says they’ll move fast, but it’s been almost a year since ChatGPT’s release kickstarted initial policy conversations, and no meaningful progress has been made
It’s also worth noting that Sen. Todd Young, quoted extensively in Fox News coverage of the forum, has previously gone on the record mirroring Google’s preferred approach to AI regulation: punt on creating new agencies or legislation and instead use existing agencies to manage AI risks.
In contrast, a new agency is exactly what a plurality of the 213 computer science professors at top US universities favored in a new survey from Generation Lab, Axios, and Syracuse University. The academics are ‘somewhat optimistic’ overall about where AI’s effects on society will net out, but they are also heavily in favor of government regulation, with 5 in 6 saying AI can and should be regulated. Notably, only a handful of the professors surveyed believe that industry can self-regulate. In addition to a dedicated agency, respondents favored mandated transparency and explainability measures, as well as clear liability frameworks establishing who is responsible for AI use.
Of note
Politics and National Security
China suspected of using AI on social media to sway US voters, Microsoft says (Reuters). Underlining the need for platform controls is Microsoft’s finding that Chinese operatives have been using generative AI since March to produce visuals for social media posts intended to sway public opinion in America and other Western countries. Worryingly, the Microsoft researchers specifically noted that “relatively high-quality visual content has already drawn higher levels of engagement from authentic social media users.”
Ron DeSantis’ Super PAC Thinks It Has Cracked the Code on Delivering His Message (Politico). Long article on DeSantis’ Super PAC, including an admission that it was texting prospective voters with an OpenAI-powered bot, which would likely violate OpenAI’s terms.
Governor Newsom Signs Executive Order to Prepare California for the Progress of Artificial Intelligence (Office of Governor Gavin Newsom). States continue to take tentative steps on AI regulation, most notably with Newsom’s executive order, which calls for a number of provisions including risk analysis reports, state employee training, university partnerships, and more. Wisconsin also announced additional task forces to study the effects of AI.
Prosecutors in all 50 states urge Congress to strengthen tools to fight AI child sexual abuse images (AP News). A proactive effort to screen existing areas of concern for ones where AI could make things worse, rather than a reaction to problems bubbling up from the bottom.
Copyright and Authenticity
Preserving trust in photojournalism through authentication technology (Reuters). A fascinating proof-of-concept test of C2PA image provenance authentication for photojournalism coverage of the Ukrainian war.
Industry
Cruise CEO says backlash to driverless cars is 'sensationalism' (Washington Post)
What OpenAI Really Wants (WIRED). A detailed profile of OpenAI, which includes some backstory on how OpenAI has avoided an adversarial reception in Washington. “Sam has been extremely helpful, but also very savvy, in the way that he has dealt with members of Congress,” says Richard Blumenthal, the chair of the Senate Judiciary Committee.
Microsoft Says It Will Protect Customers from AI Copyright Lawsuits (Bloomberg). Presumably to address potential large corporate customers’ objections to AI adoption.