Support for AI regulation is cooling
As government lumbers along, will the Internet platforms step up? And will a CEO-laden happy hour contribute to meaningful legislation?
This week, our polling meta-analysis reveals support for AI regulation is strong but moderating; AI watermarking gets better but won’t solve misinformation on its own; OpenAI passes $1bn in revenue; Schumer hosts a CEO-laden AI forum; and copyright news continues apace.
Americans remain supportive of AI regulation, though said support may be cooling
Chart: Affirmative answers to the question “Do you support federal regulation / agencies / Congressional action on Artificial Intelligence?”
Support for AI regulation remains strong but is moderating, according to our meta-analysis of eleven public polls [1]. A recent Pew Research report highlights a few consistent trends spotted in polls run since the beginning of the year:
Americans are less concerned about AI’s impact on their job versus others’ jobs;
College-educated respondents, who are more likely to hold white-collar jobs, expect AI to have a larger impact on work; and
While support for regulation is bipartisan, there are usually partisan differences depending on the question – Republicans are more distrustful of big tech but also more distrustful of regulation.
Over the past few years, public support for AI regulation has been broad, and that’s important for two reasons:
Politicians aren’t going to stick their necks out for legislation that isn’t popular; and
In a polarized electorate with broad skepticism of federal agencies and regulation of any kind, it’s unusual to see this kind of support for government oversight.
Given that support is still above 50%, this slippage isn’t likely to significantly dampen political action, but it is a leading indicator of how rapid and consequential that action will be. If the downward trend continues and support drops below a majority, however, politicians may lose interest in regulation. It’s a trend worth watching as firms and individuals ramp up AI adoption and the impacts become clearer.
[1] Included polls, with the end date of each survey period: Pew (11/7/21), MITRE/Harris (11/7/22), Pew (12/18/22), Change Research (3/9/23), Data for Progress (3/24/23), The Verge (4/15/23), YouGov (5/23/23), Center for Growth and Opportunity (6/8/23), YouGov (7/21/23), Pew (7/23/23), Leger/LA Times (7/30/23)
Watermarks aren’t going to solve misinformation
To date, AI’s two biggest threats to elections are the possibility of hyper-personalized persuasion bots, unlikely to arrive anytime soon, and the clear and present danger of realistic fake photos, videos, and audio that would ramp up the churn of misinformation. So far, politicians have been placing their bets on technical solutions to the latter, with no clear path forward on regulation.
This week, Google turned out one such technical band-aid and released SynthID, a tool for watermarking and identifying AI-generated images. The watermarks are imperceptible to the human eye but detectable by specialized algorithms. It’s a fascinating technological step forward: the watermark is supposedly resistant to edits like resizing, compression, and filtering, all issues that have plagued prior attempts. With this release, Google is following through on the pledge it made at the White House in July, alongside other major tech companies, to watermark AI-generated content – a step lawmakers see as significant progress toward curbing the threat of AI-fueled misinformation.
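To make the mechanism concrete, here is a toy sketch of one classic technique, correlation-based (spread-spectrum) watermarking: a faint pseudorandom pattern derived from a secret key is added to the pixels and later detected by correlating against that same pattern. This is not how SynthID actually works – Google hasn’t published its method – and the key, grid size, and strength values below are illustrative assumptions.

```python
# Toy sketch of correlation-based image watermarking, for illustration only.
# SynthID's real method is not public; the constants here are assumptions.
import numpy as np
from PIL import Image

CANON = (64, 64)   # coarse carrier grid; a low-frequency pattern survives resizing better
SECRET_KEY = 1234  # hypothetical shared secret between embedder and detector

def _carrier(key: int) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=(CANON[1], CANON[0]))

def embed(img: Image.Image, key: int = SECRET_KEY, strength: float = 4.0) -> Image.Image:
    """Add a faint, key-derived pattern to every pixel (imperceptible at low strength)."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float64)
    pat = _carrier(key).astype(np.float32)
    # Upsample the coarse pattern to the image's resolution before adding it.
    pat_img = np.array(Image.fromarray(pat).resize(img.size, Image.BILINEAR))
    arr += strength * pat_img[..., None]
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

def detect(img: Image.Image, key: int = SECRET_KEY) -> float:
    """Normalized correlation with the secret pattern; higher means 'likely watermarked'."""
    small = np.asarray(img.convert("L").resize(CANON, Image.BILINEAR), dtype=np.float64)
    small -= small.mean()
    pat = _carrier(key)
    return float((small * pat).sum() / (np.linalg.norm(small) * np.linalg.norm(pat) + 1e-9))
```

Even this toy version hints at the limits discussed next: detection depends on a secret staying secret, a threshold being tuned, and the image not being degraded too badly – all things a motivated bad actor can attack.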
Will this approach to watermarking content have a meaningful impact on misinformation in a fraught political landscape? We think it’s insufficient, for three reasons:
Watermarking is opt-in: There are plenty of other tools that generate images without any watermark, including open-source models like Stability AI’s Stable Diffusion (already being used for nefarious purposes). Without clear regulatory requirements for all generative applications (including open-source models), there will always be other options for bad actors.
Bad actors will find methods to evade detection: Much like the endless cybersecurity arms race, where developers build tools to protect users and attackers find ways to beat them, it’s inevitable that for every new watermarking tool, someone will eventually find a way to fool detection.
Reach is more consequential than content: Doctored content isn’t a new issue, but modern platforms can distribute that content at scale more quickly than ever. Even in a perfect world where every tool spitting out AI-generated content also included an imperceptible watermark, it would be up to social media and other distribution platforms to label and/or demote deepfakes in their distribution algorithms. There’s no guarantee that the platforms would agree to this, whether for technical (WhatsApp messages are end-to-end encrypted), philosophical (Twitter), or state-sponsored (TikTok) reasons.
Let’s say Campaign A creates a stylized, AI-generated image of a candidate standing on the moon to represent their commitment to investing in space exploration programs, and Campaign B creates a photorealistic image of their opponent shaking hands with a terrorist. The first is benign and unlikely to mislead, while the second is misinformation that could change plenty of votes. The platform slaps an “AI generated” label on both, but labeling alone is insufficient, as people are prone to believe images are real if the content aligns with their existing viewpoint, no matter how obviously fake they are.
How should a platform moderate both of these pieces of watermarked content? If their current content moderation policies apply (a big if, given the erosion of content moderation investments), the watermarks have essentially no impact.
The only way to holistically solve the inauthentic media problem is to provably authenticate legitimate (i.e., captured in the real world, not generated digitally) content. Photos and videos taken with real cameras, along with audio recorded with a real microphone, should get their own special badge and treatment on the Internet. An emerging standard for content authenticity, building on decades of research in cryptography and identity, is being developed by a group that includes Sony, Adobe, and Microsoft. If platforms like Meta, Twitter, and TikTok reward legitimate political content with increased distribution and prominent labeling, it won’t matter how or where doctored or AI-generated content is generated – it will never get the coveted “authentic” label.
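To sketch the core of that idea (and only the core – the emerging standard signs structured manifests under a certificate chain, which is considerably more involved), a capture device could sign the media bytes with a private key, and a platform could verify that signature before granting an “authentic” label. The file name and keys below are hypothetical.

```python
# Toy sketch of cryptographic provenance: a camera signs what it captures, and a
# platform verifies the signature before treating the file as authentic.
# This is NOT the actual content-authenticity standard; it only shows the sign/verify core.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Camera side: the device holds a private key (in practice, in secure hardware).
camera_key = Ed25519PrivateKey.generate()
with open("capture.jpg", "rb") as f:   # hypothetical capture file
    photo_bytes = f.read()
signature = camera_key.sign(photo_bytes)

# Platform side: verify against the manufacturer's published public key.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, photo_bytes)
    label = "authentic"
except InvalidSignature:
    label = "unverified"               # edited, AI-generated, or unsigned
print(label)
```

Any change to the bytes breaks the signature, which is part of why standards in this space sign provenance manifests that can record legitimate edits rather than raw files.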
Of Note
Policy
Pa. legislators look to rein in AI in health insurance claims (WESA) A draft bill from Pennsylvania state legislators would require medical insurance companies to submit AI algorithms to the Pennsylvania Department of Insurance to check for bias.
Schumer’s AI meeting will include top labor and civil rights advocates (The Washington Post) Sen. Schumer (D-NY) is following through on his promise to host a series of forums that will presumably inform AI policy. This is the first such “insight forum,” and the guest list is heavy on tech CEOs, with a few labor leaders and researchers sprinkled in.
Copyright
US Copyright Office issues notice of inquiry on artificial intelligence (Cointelegraph) “The Office is undertaking a study of the copyright law and policy issues raised by generative AI and is assessing whether legislative or regulatory steps are warranted.”
AI Policy 'Weaknesses' in UK Put Artists at Risk, MPs Warn (Decrypt)
Web Scraping for Me, But Not for Thee (Technology & Marketing Law Blog)
Generative AI and intellectual property (Benedict Evans)
OpenAI disputes authors’ claims that every ChatGPT response is a derivative work (Ars Technica) OpenAI’s response to the book-scraping lawsuit brought by Sarah Silverman et al.
Websites That Have Blocked OpenAI’s GPTBot (Originality.AI) The number of major websites blocking OpenAI’s GPTBot is increasing, now including Quora and Indeed.
Technology
OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending (The Information) Notable for its top-line number, but also for the descriptions of how hedge funds and trading firms like Jane Street are using ChatGPT (and developing their own LLMs).
Behind the AI boom, an army of overseas workers in 'digital sweatshops' (The Washington Post)
A.I. Brings the Robot Wingman to Aerial Combat (The New York Times)
The A.I. Revolution Is Coming. But Not as Fast as Some People Think. (The New York Times) Detailing how corporations are trying out generative AI but not deploying it rapidly in production.
China lets Baidu, others launch ChatGPT-like bots to public, tech shares jump (Reuters)
The Myth of Open Source AI (WIRED) Discussing how open-source LLMs are still directly or indirectly controlled by big tech companies, and the resulting impacts.