This week, a flood of regulatory action in the US and UK; more details on Biden’s broad but shallow executive order; and what the order might do about deepfakes.
Did a friend or colleague forward this to you? Welcome! Sign up above for a free weekly digest of all you need to know at the intersection of AI, policy, and politics.
The US and UK lead the way on regulatory action
Only a year after OpenAI unveiled ChatGPT and astonished even AI experts with its advanced capabilities, world governments are taking major steps to regulate artificial intelligence. Three of the most significant regulatory developments to date emerged this week:
The Biden administration’s executive order on artificial intelligence, a long-anticipated and wide-ranging attempt to regulate mostly short- and medium-term risks and opportunities. It also lays the groundwork to address long-term risks by increasing AI knowledge and capabilities across federal agencies, and altogether represents a marked departure from the US’s laissez-faire approach to regulating social media.
The UK AI Safety Summit’s Bletchley Declaration, an agreement signed by 28 countries that pledges international cooperation on managing the long-term catastrophic risks posed by the most advanced “frontier” AI models. The declaration doesn’t include any policy specifics but is historic for its diverse list of signatories, which includes all of the countries currently leading AI development (the US, China, Singapore, Israel, and Canada, among others).
VP Kamala Harris’s new initiatives on AI safety, an assortment that includes establishing a US AI Safety Institute, draft guidance on governmental use of AI, a declaration on responsible military use of AI, AI-related philanthropic initiatives, efforts to reduce AI-enabled fraud, and a call for international norms on content authentication (whew!).
In addition, the UK announced the formation of its own AI Safety Institute and a multilateral agreement with eight leading tech companies to test AI models before their release, and the EU’s AI Act is within striking distance of becoming law.
There’s now international agreement that AI presents both near- and far-term benefits and risks, but the divergence on regulatory specifics shows we’re still far from a global consensus on what exactly those are, or how best to manage them.
The Big EO
President Biden’s ambitious and timely executive order (EO) is largely pro-innovation, focused on increasing America’s AI capabilities while clear-eyed about potential short- and medium-term risks. At 20,000 words, the order consists of eight major categories of directives, each with specific objectives, assigned federal departments, and timelines for completion before the end of Biden’s term:
AI Safety and Security;
Innovation and Competition;
Labor and Jobs;
Equity and Civil Rights;
Consumer Protection;
Privacy;
Federal Use of AI; and
American Leadership.
The EO should not be confused with comprehensive regulation of artificial intelligence. Instead, it reflects the limits of what’s possible within the executive branch’s powers. While the order is legally enforceable, it is reversible under a different administration. Precisely because of the limits on the administration’s power, there’s little to no mention of liability, transparency, elections, funding, and public education, among other critical topics. For comparison, a leading bipartisan Congressional proposal on AI regulation from Senators Blumenthal and Hawley would enact licensing requirements, create or clarify corporate liability, and establish oversight bodies for safety and security – all of which would carry clear legal authority and be protected from the whims of subsequent Presidential administrations.
As a result, we expect to see an ongoing (but fitful) push for actual legislation that will strengthen some of the initial ideas laid out in the order and expand into additional policy areas, particularly in model transparency and oversight, liability, and consumer protection. We’re already seeing the first hints of this, with Politico reporting Thursday evening that Sens. Warner (D-VA) and Moran (R-KS) are introducing a bill that would require federal agencies to follow specific safety standards developed by the National Institute of Standards and Technology.
Critics have expressed valid concerns about the order leading to regulatory capture, especially a provision in the safety and security section that imposes reporting requirements on models over a certain size. Such a requirement is controversial because larger, more established incumbents can absorb reporting and compliance costs far more easily than upstarts can. Nonetheless, the order provides AI companies with a clearer regulatory roadmap within which to operate.
Earlier this year, many wondered whether the federal government would take any action at all on AI regulation, or whether it was even capable of coming up with regulations for such a complex technology. The answer to both questions is now an emphatic “yes.”
The order is formidable in scope and potential impact, so we won’t attempt to tackle a full analysis in one newsletter. Instead, over the next few weeks, we’ll take a deeper dive into different sections of the order to examine what they might mean for industry and citizens alike, starting this week with a look at the subsection on synthetic content.
The EO on Synthetic Content
The EO’s section relating to deepfakes was reportedly a personal priority for President Biden, likely because of the known near-term risks to individuals (fraud, sexual deepfakes, etc.) and institutions (elections, media, etc.). As we’ve covered in past newsletters, the rapidly proliferating and easy-to-use technology behind deepfake audio, image, and video content is a clear and present danger, and this EO is a helpful but far from comprehensive solution.
As with many other directives in the EO’s suite of safety- and security-focused measures, there are notable industry fingerprints on the text, with terminology pulled straight from existing efforts to watermark and authenticate synthetic content, such as the Coalition for Content Provenance and Authenticity (C2PA). The directives give the Commerce Department until June 2024 – the beginning of the general election season – to analyze current and potential future best practices for tracking, detecting, and dealing with synthetic content harms. The department then has until the end of 2024 to develop guidance leveraging that research.
We’ve written extensively on the risk of relying solely on watermarking as a strategy to reduce or eliminate mis- and disinformation, so it’s encouraging to see the administration explicitly call out content authentication and provenance alongside watermarking. On the other hand, most of these technological solutions and standards will be largely moot if large distribution platforms like Meta and TikTok don’t take action to flag and categorize this content, preferably by elevating authenticated content above non-authenticated content.
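To make the distinction concrete: a watermark is a signal embedded in the content itself, while provenance is signed metadata that travels alongside the content and can be cryptographically verified. The toy sketch below illustrates the provenance idea only – it uses a shared HMAC key and a JSON manifest purely for illustration, whereas real C2PA manifests rely on certificate chains and a standardized format.

```python
import hashlib
import hmac
import json

# Toy illustration only: real C2PA provenance uses X.509 certificate
# chains and a standardized manifest format, not a shared HMAC key.
SIGNING_KEY = b"publisher-secret-key"  # stand-in for a publisher's signing key

def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Create a signed manifest binding origin metadata to exact content bytes."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generated_by": tool,  # e.g. "camera" or the name of a generative model
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches this exact content."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw image bytes..."
manifest = attach_provenance(image, creator="Newsroom X", tool="camera")
print(verify_provenance(image, manifest))         # True: signed and unmodified
print(verify_provenance(image + b"!", manifest))  # False: content was altered
```

The key property is that verification fails if either the content or the manifest is altered – but the whole scheme is moot if a distribution platform strips the metadata or never checks it, which is exactly why platform participation matters so much.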
Ultimately, this is a fine starting point that should force agencies to grapple with the threat of deepfakes and provide leadership on best practices in dealing with them. However, it lacks teeth and is unlikely to directly result in dramatic harm reduction. The section reads like the administration wants to acknowledge that deepfakes are an emerging risk, but lacks the authority to substantially regulate them.
This shortcoming highlights the need for Congress and state legislatures to urgently pass real legislation here. That could include prohibitions on using bots and artificially generated content to deceive voters or defraud consumers, disclosure requirements for artificially generated political advertising, and clear liability for damaging personal deepfakes.
Of Note
Government
Global adversaries, allies reach first agreement on containing AI risks (The Washington Post)
Opinion | Why Commerce Secretary Gina Raimondo is leading the White House on AI (The Washington Post)
Safety and Security Risks of Generative Artificial Intelligence to 2025 (UK.gov)
The Man Behind Biden’s Sweeping AI Executive Order (Politico)
White House AI Executive Order Takes On Complexity of Content Integrity Issues (Tech Policy Press)
Copyright
Judge pares down artists' AI copyright lawsuit against Midjourney, Stability AI (Reuters)
News Group Says A.I. Chatbots Heavily Rely on News Content (The New York Times)
IAC warns regulators generative AI could wreck the web (Axios)
Deepfakes
Fake Nudes of Real Students Cause an Uproar at a New Jersey High School (Wall Street Journal)
A.I. Muddies Israel-Hamas War in Unexpected Way (The New York Times)
Other
Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight (The New York Times)
The Future of AI Is GOMA (The Atlantic)
A Tool to Supercharge Your Imagination (The Atlantic)
AI May Soon Weigh In on Regulation (Wall Street Journal)
Google Commits $2 Billion in Funding to AI Startup Anthropic (Wall Street Journal)