Inching forward on policy
Seeing the first glimmers of legislative action at the federal level
This week’s developments
Legislative activity is picking up, with preliminary bills and proposed regulations in the House and Senate, including a notable first effort at a transparency bill from Rep. Ritchie Torres
Efforts are underway to define how AI should be used in political campaigns and what AI services campaign vendors can offer
Senators question Zuck over LLaMA leak
The EU Commissioner for Competition argues corporations should be kept away from AI rule-making processes
The pace of global development is picking up, presenting a challenge to US regulation: South Korean and Chinese developers both released impressive results from their own LLMs
Analysis: Houston, we have a transparency bill!
Then-Congressman-elect Ritchie Torres on December 7, 2020
( LEV RADIN/PACIFIC PRESS/SHUTTERSTOCK )
Congressman Ritchie Torres, a Democrat from NY-15, introduced a bill that would mandate disclosure for any content created by generative AI. The bill is basically a one-liner and still needs a lot more definition, with obvious open questions about what constitutes generative AI, who needs to disclose (model developers vs. users), how enforcement would work, and so on. It is also likely dead on arrival in the GOP-controlled House Energy and Commerce Committee: it's highly unlikely that Republicans would support additional regulatory oversight by the FTC, and the FTC isn't exactly the best fit to enforce distributed harms like these. But it's the right start.
We’re big fans of AI transparency legislation. Consider the flood of AI-generated content that will bog down basic administrative and public comment processes, as well as spam consumers. If agencies and individuals have a way to sort AI-generated content into a separate category from human-generated content, they can give more weight to meaningful, human-generated submissions and arrive at decisions that reflect the actual majority opinion of constituents rather than one motivated individual with a prolific chatbot. If these policies can be effectively enforced via a regulatory agency or a private right of action, companies and individuals will have to think carefully about what they use generative AI for, and there will be a significant disincentive to ‘flood the zone’.
We expect to see more of these kinds of bills as members of Congress prepare for the next election cycle and want to be seen as leading on AI with their constituents and donors. But given the lengthy and challenging process of passing federal legislation (especially amid the current House chaos), there’s a big opportunity to start at the state level. California actually already has a law on the books, the Bot Disclosure and Accountability Act (2019), which requires disclosure in sales and political contexts, but it was written before the new era of sophisticated, commercially available LLMs, and there’s little to no enforcement. It would be smart to extend it to other domains and allow for a private right of action so that individuals can take action if entities violate it.
News
The US Government Round-up
Sam Altman and Sen. Richard Blumenthal (Haiyun Jiang/The New York Times)
In addition to Torres’ House bill, there’s also some preliminary bipartisan work on AI policy gaining steam in the Senate. Much of the discussion is happening in the Senate Subcommittee on Privacy, Technology, & the Law, led by Sens. Blumenthal (D-CT) and Hawley (R-MO). Hawley is pushing for legislation that would, among other things, require licensing for model development. This is an interesting legislative avenue, and one that OpenAI and Microsoft have been pushing for (perhaps not least because it benefits large corporations like their own). Things are moving fast both in the open-source world and internationally, though, and it remains to be seen whether licensing requirements will meaningfully affect outcomes as it rapidly becomes easier to train and access models.
House Democrat's bill would mandate AI disclosure (Politico, 6/3/23)
Josh Hawley is trying to grab the reins on AI regulation, circulating proposals that’d restrict AI access to children, ban interactions with China, and require licenses to create generative AI models. (Axios, 6/7/23) Requiring a license to train models is in line with OpenAI/Microsoft’s regulatory positioning.
Sen. Blumenthal is sketching out the framework for a new AI agency (POLITICO Pro, 6/7/23)
How Sam Altman Stormed Washington to Set the A.I. Agenda (NYT, 6/7/23) Altman has visited with more than 100 lawmakers, and the article features a quote from none other than Josh Hawley.
Senators send letter questioning Mark Zuckerberg over Meta’s LLaMA leak (VentureBeat, 6/7/23) Which senators, you might ask? The leaders of the Senate Subcommittee on Privacy, Technology, & the Law: Richard Blumenthal, and…Josh Hawley.
Air Force “Killer AI Drone” story ends up as disinformation (The War Zone, 6/1/23) A story went viral last week about a simulated AI weapons system that supposedly killed its own controllers in order to maximize the results of its mission of destroying anti-aircraft missiles. Turns out, it didn’t happen.
Microsoft Is Bringing OpenAI’s GPT-4 AI model to US Government Agencies (Bloomberg, 6/7/23)
Use AI to regulate AI, Google executive says (POLITICO Health Care, 6/7/23)
Global developments
We need to keep CEOs away from AI regulation (Op-Ed, Financial Times, 6/5/23) Written by a special advisor to the EU Commissioner for Competition, Marietje Schaake.
Imagine the chief executive of JPMorgan explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection and set liquidity to loan ratios. He would be laughed out of the room. Angry constituents would point out how well self-regulation panned out in the global financial crisis. From big tobacco to big oil, we have learnt the hard way that businesses cannot set disinterested regulations. They are neither independent nor capable of creating countervailing powers to their own… Somehow that basic truth has been lost when it comes to AI.
The Shanghai AI Lab releases results from InternLM (6/7/23) On performance benchmarks, it scores above ChatGPT, LLaMA, and others, but falls behind GPT-4. And: ”InternLM demonstrates excellent capability of understanding Chinese language and Chinese culture, which makes it a suitable foundation model to support Chinese-oriented language applications.” It scores well on Marxism and Mao Zedong Thought.
South Korea’s Naver to target foreign governments with latest ChatGPT-like AI model (Financial Times, 5/28/23) The Google of South Korea is targeting Spain, Mexico, and the Arab world for custom foundation models without US data controls at play.
UK to host major AI summit of ‘like-minded’ countries (POLITICO, 6/7/23)
New companies are being founded to make running foundation models easy on regular computers, too. (Importantly: not training them.)
Campaigns and elections
National using AI for attack ads: The AI political campaign has arrived (Stuff, 6/5/23)
It’s the wonky eyeball that gives it away. In the Instagram photograph, a woman stares out the window into a dark street. A caption about home invasions runs underneath. The woman looks anxious. But, she also looks off ... not quite human. The AI political campaign is here. In the last month, National has published at least four images generated by artificial intelligence to its social media accounts.
Using AI to defend and win in politics (Higher Ground Labs, 6/2/23)
We can use generative AI to our advantage in politics and campaigns. When used well, AI can be an equalizer and a timesaver. It allows us to automate tasks and augment the work of our people. This is a generational opportunity for Democrats to get ahead.
Within the distributed ecosystem that is the Democratic Party, political venture capital firm Higher Ground Labs is trying to grab the generative mantle. Co-founder Betsy Hoover names content generation and financial management as two areas of focus among portfolio companies.
Miscellaneous
Study finds a 14 percent increase in customer service productivity with AI tools (Stanford Digital Economy Lab, April 2023)
ChatGPT took their jobs. Now they walk dogs and fix air conditioners. (Washington Post, 6/2/23)
Eating-disorder group’s AI chatbot gave weight loss tips, activist says (Washington Post, 6/1/23)