No new regulation, says GOP senator
One of Sen. Schumer’s top AI lieutenants quashes the possibility of Congress passing meaningful new AI legislation
This week, Sen. Todd Young of Indiana quashes the possibility of Congress passing meaningful new AI legislation, Nvidia offers an intriguing non-legislative pathway to restricting bad actors, researchers illustrate the danger of jailbroken models, and more…
Republican Sen. Todd Young signals preference for an existing-agency approach to AI regulation
Given partisan gridlock and Congress’s anemic track record on technology regulation, we’ve long been skeptical that meaningful AI policy will get passed at a federal level, and Republican Senator Todd Young has just all but confirmed it. In an interview on the Politico Tech podcast, Sen. Young said:
“We’re probably not going to have to ban a bunch of things that aren’t currently banned. We’re not going to have to pass a lot of major legislation to deal with new threats”
Instead, Young argued for leveraging the existing federal agency structure alongside an ongoing series of forums for Congress to learn more about subtopics (copyright, workforce, alignment, national security, etc.).
“Many of these laws we have merely need to be applied to current and to future circumstances. That’s going to require ongoing vigilance from the agencies.”
This is especially notable coming from Sen. Young because not only is he a member of the U.S. Senate Committee on Commerce, Science, and Transportation, which has oversight over the FTC and FCC, he’s also one of the three senators handpicked by Sen. Schumer to lead his bipartisan AI policy initiative.
It’s reasonable to rely on existing agencies when and where they’re able to regulate, but it's far from guaranteed they’ll be able to do so. AI-generated fraud is still fraud; however, the rapid pace of development and novel risks of AI – not to mention the fact that existing agencies have been limited in the efficacy of their tech regulation – suggest Washington-as-usual will not suffice.
Sen. Young is mirroring Google’s preferred approach to regulation, which we wrote about in a prior newsletter. In June, Google encouraged existing agencies to manage the risks associated with AI instead of creating new agencies or passing new legislation. It’s probably no coincidence that Google’s corporate PAC and individual employees were the 9th largest group of corporate-aligned donors (PAC + individual donations) to Sen. Todd Young’s campaign in 2022.
Sen. Young doesn’t have much to say about which agencies, exactly, will take up this regulatory burden. Most have limited jurisdiction and would require Congress to expand their authority, as illustrated most recently by the FEC deadlocking on a key petition to require disclosure of AI-generated elements in political ads, with commissioners citing…a lack of Congressional authority. Additionally, Republican members of Congress have a storied track record of undermining federal agencies at every opportunity: refusing to approve necessary budget increases, slashing funding, threatening to shut them down, blocking key commissioner nominees, and generally gumming up the gears of productive regulatory motion. If Congress pushes responsibility to the agencies without assigning the powers and funding they need to carry out said regulation, as tech lobbyists on both sides of the aisle have long encouraged, then AI regulation will effectively be ceded to the tech companies themselves.
Nvidia wants to know their customers’ customers
In June, The Information reported that Nvidia gives preferential chip access to smaller cloud providers that don’t manufacture their own chips over large competitors that do, such as AWS and Google. This week, the outlet reports that Nvidia is now asking some of those smaller providers for their customers’ names.
This speaks to an intriguing non-regulatory path to restrict bad actors from training models on large GPU clusters. Since the supply of chips is so limited, Nvidia itself has a significant amount of power to influence which customers (and customers’ customers) are able to access these clusters. Though Nvidia’s approach may yield some protections, we believe that user transparency and know your customer (KYC) requirements should be enforced through regulation.
Open source, jailbreaks, and chaos
In the emerging AI discourse, closed-source AI models from OpenAI, Google, and others represent controlled, “safer” AI, while open-source models, like those from Meta, bring a more chaotic approach that may produce some bad outcomes but also unexpected good ones that wouldn’t arise under more centralized regimes.
On the open-source side, a project that helps programmers run large language models on their own computers released demonstrations of what happens when you uncensor, or “un-align,” Meta’s Llama 2 model. The uncensored models comment freely on medicine, religion, and other topics, without the guardrails built in to stop them.
Even on the closed-source side of the house, though, results so far aren’t that different when users are determined to break through the controls. Researchers at Carnegie Mellon have found a seemingly universal way to “jailbreak” even closed-source language models like ChatGPT, Bard, and Claude. The researchers used open-source models to discover specific nonsense phrases that cause the bots to “forget” their alignment training.
The AI companies can, and do, filter out those specific nonsense queries, but the authors argue that there is no way to universally defend against new attack phrases, which can be generated arbitrarily. The policy implication is that even regulated, closed-source models cannot be expected to be completely foolproof. So far, the divide between the open and closed worlds is not as large as it seems, and on the closed side of the house, one cannot expect companies to accept any agreement or regulation that requires “safety” at all times.
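For a sense of how such an attack works mechanically, here is a minimal sketch of the search loop. It is illustrative only: `score_compliance` is a hypothetical stub standing in for a query to an open-source model, and the published method uses model gradients to propose token swaps rather than the random substitutions shown here.

```python
import random

# Illustrative sketch: greedily mutate a suffix of "nonsense" tokens to
# maximize the chance the model answers instead of refusing. The real attack
# scores candidate suffixes against open-source models (and uses their
# gradients to pick promising swaps); this stub just returns random scores.

VOCAB = ["describing", "similarly", "Now", "write", "!!", "\\", "=="]

def score_compliance(prompt: str) -> float:
    """Hypothetical stand-in for the log-probability that the target model
    begins its reply with an affirmative phrase like "Sure, here is"."""
    return random.random()  # a real attack queries a local open-source model

def greedy_suffix_search(base_prompt: str, suffix_len: int = 20, steps: int = 200) -> str:
    suffix = [random.choice(VOCAB) for _ in range(suffix_len)]
    best = score_compliance(f"{base_prompt} {' '.join(suffix)}")
    for _ in range(steps):
        candidate = suffix.copy()
        candidate[random.randrange(suffix_len)] = random.choice(VOCAB)  # mutate one token
        score = score_compliance(f"{base_prompt} {' '.join(candidate)}")
        if score > best:  # keep the mutation only if it raises compliance
            suffix, best = candidate, score
    return " ".join(suffix)  # appended to the prompt, this is the "jailbreak"
```

The structure of the loop is why filtering doesn’t settle the matter: because the suffix is optimized against openly available models and then transferred, blocking any one discovered string does nothing to stop the next run of the search.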
Discussions continue about how AI will impact politics
Despite serious concerns about the potential for AI to interfere with elections by amplifying misinformation, campaign use cases so far have been limited to using ChatGPT to write fundraising emails and generating a handful of deepfake media assets. The discussion continues among electeds and consultants both in the US and globally:
In Kansas, the Secretary of State noted his fears about disinformation;
In Kentucky, a State Senate hearing on AI showcased elected officials’ movie-inspired fears about AI taking over humanity;
In Brazil, political consultants are marketing courses on how AI will impact their elections.
How real is the threat of AI deepfakes in the 2024 election? NPR analyzes the DeSantis PAC’s fake Trump voice from a few weeks ago, while Eric Wilson, a GOP tech consultant and commentator, calls the moral panic over AI “way overblown.”
Wilson noted a significant move this week from Microsoft, whose ad subsidiary, Xandr, banned political ads. Xandr had previously pursued the political business with vigor. The move is notable given Microsoft’s major investment in OpenAI and its recent partnership with Meta on Llama. OpenAI already dissuades political activities on ChatGPT, though it’s not a hard ban, and the company is building out an Elections arm of its policy team.
Of note
Meta Begins Blocking News for Canadian Users (Wall Street Journal). Canada passed a law requiring social media companies to compensate news outlets. Meta responded by getting out of the news business in Canada, and Google is following suit.
Meta open sources AudioCraft, generative AI for audio (Meta). Uses fully licensed training data.
Google announces Med-PaLM M (arXiv). A new “generalist biomedical AI” that read chest X-rays better than humans in 41% of attempts. A far cry from replacing radiologists, but progress.
AI-supported radiologists outperform radiologist duos in Swedish mammogram detection experiment (The Lancet). Comparing radiologist + AI to double radiologist readings, the human-in-the-loop model reduced the screening workload on radiologists by 44% while detecting 20% more cancers.
A.I.’s Inroads in Publishing Touch Off Fear, and Creativity (New York Times). Speculating on AI’s effects on the publishing industry, one that has endured many waves of technology-driven restructuring.
The race to find a better way to label AI (MIT Technology Review). A cryptographic approach to watermarking the origins of images to establish their provenance in a deepfake world.