A flood of AI talk in DC, in the states, and even the G7
May 19, 2023: Mr. Altman goes to Washington, amid fans and critics
Welcome
This week, Sam Altman seemed to charm Congress, and activity around AI regulation continues to increase across the European Union and the US federal government, as well as percolating in state legislatures.
The rapid growth of global interest in artificial intelligence regulation is notable – it’s been less than six months since ChatGPT was released, but the consumer app catalyzed a frenzy of discussion on how best to regulate AI. Many of AI’s biggest risks are yet to materialize, but many argue that we’re already behind on regulation, and the furious pace of development seems to reinforce this concern. Widespread regret about the lack of regulation around social media’s societal impacts has also left many US lawmakers wary of waiting for tech companies to self-regulate and worried about the worst harms coming to pass before guardrails are in place.
It’s dizzying to keep up with these developments, and that’s why we created the AI Political Pulse newsletter. We’ll cover the state of affairs at every level of government. Please subscribe for weekly updates.
News
AI makes the agenda for the G7 (NYT, 5/18/23)
“American officials say that in the case of chatbots, even a vague foundational discussion may help in establishing some shared principles: that the corporations that bring products using the large-language models will be primarily responsible for their safety, and that there must be transparency rules that make it clear what kind of data each system was trained on.”
This lawmaker stands out for his AI expertise. Can he help Congress? (Washington Post, 5/17/23)
A profile on Rep. Jay Obernolte (R-CA-23) notes his computer science degree, work history as a video game developer, and emergence as one of the more tech-knowledgeable Members alongside Ted Lieu and Don Beyer.
Sam Altman’s Congressional Testimony (BBC, 5/17/23)
“Mr Altman said a new agency should be formed to license AI companies… Mr Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections - a prospect he said is among his "areas of greatest concerns". "We're going to face an election next year," he said. "And these models are getting better."
He gave several suggestions for how a new agency in the US could regulate the industry - including "a combination of licensing and testing requirements" for AI companies, which he said could be used to regulate the "development and release of AI models above a threshold of capabilities…What was clear from the testimony is that there is bi-partisan support for a new body to regulate the industry.”
Behind closed doors, Altman warns against too much regulation (NPR Politics Podcast, 5/17/23)
NPR political and disinformation reporters cover Sam Altman’s testimony to Congress and note that while he publicly warned of AI risks and harms and signaled openness to regulatory action, he also cautioned lawmakers behind closed doors that “aggressive regulation could hurt the growth of AI.”
AI can be “misused in ways that are turbocharging fraud” says FTC chair Lina Khan (On with Kara Swisher podcast, 5/17/23)
Chair Khan notes a “doubling” (+5) of the agency’s technology staff, the positioning of the FTC, and the need for foundation model developers to do safety testing before release, not waiting to clean things up after release.
Stability.AI, maker of Stable Diffusion, submits letter to the Senate to argue for open models (Stability.AI, 5/17/23)
Positioning itself against OpenAI and other large companies seen as building a regulatory moat, Stability.AI argues:
“Open models and open datasets will help to improve safety through transparency, foster competition, and ensure the United States retains strategic leadership in critical AI capabilities. Grassroots innovation is America’s greatest asset, and open models will help to put these tools in the hands of workers and firms across the economy”
Europe takes aim at ChatGPT with what might soon be the West’s first A.I. law. (CNBC, 5/15/23)
“A key committee of lawmakers in the European Parliament have approved a first-of-its-kind artificial intelligence regulation — making it closer to becoming law.”
The EU AI Act creates a risk-based framework for regulation, and bans activities including biometric categorization of personal attributes, social scoring, predicting criminal offenses, and others, and imposes safety checks, data governance, copyright observance, and other measures before making foundation models public. Numerous steps remain before the Act comes into force.
State of play
US state lawmakers have proposed AI bills in Connecticut, Texas, Illinois, California, and many more states (National Conference of State Legislatures compiled a good list, although it’s a month out of date.)
Investors suggest AI could be deflationary, boost profit margins
Goldman Sachs Research: “…Artificial intelligence represents the biggest potential long-term support for profit margins. Our economists' productivity estimates suggest AI could boost net margins by nearly 400 bp over a decade.” (CNBC, 5/17/23)
Bridgewater Co-Chief Investment Officer Karen Karniol-Tambour, in noting the power that offshoring and previous technology innovation had in creating deflationary forces in the global economy, argued that AI could be one of the last remaining tools to maintain that dynamic in the face of inflationary pressures in the global economy. (Sohn Conference, 5/9/23)
Microsoft’s Chief Economist compares AI to the invention of internal combustion engines with promise and peril (MarketWatch, 5/3/23)
“I hope AI will never ever become as deadly as internal combustion engines, but I’m quite confident that AI will be used by bad actors and yes it will cause real damage.”
Internal combustion engines were a “wonderful invention,” he noted, but they still cause thousands of deaths.
Google: AI should not be considered an inventor (Axios, 5/15/23)
“The US Patent Office is currently soliciting comments on AI technologies and inventorship…AI technology should not be considered an "inventor" by U.S. patent law, Google argues in a new filing with the U.S. Patent and Trademark Office first shared with Axios.”