The $100,000 reason Congress can’t regulate AI
This week: Congress relies on external nonprofits to fund staffers writing AI bills, and an alliance emerges among open-source AI developers.
Who’s really writing AI legislation in DC?
This week, Politico published a story about how key staffers in congressional offices tasked with writing AI legislation are funded in part by big tech companies. The staffers belong to a “rapid response cohort” of congressional AI fellows placed by the American Association for the Advancement of Science (AAAS), a DC-based nonprofit. However, unlike other experts the AAAS has placed over the past 50 years, these AI policy-focused staffers are funded substantially by AI companies like OpenAI, Microsoft, and Google.
Regardless of whether this is an ethical arrangement, the situation raises the question: why is the U.S. federal government reliant on external organizations to fund the staffers needed to create technology policy?
In the past few years, Congress has lost a significant number of senior staffers in key offices and committees tasked with technology regulation. These individuals have in large part gone on to work directly for the tech companies they previously regulated (or their lobbying firms). This exodus has left a vacuum of technology expertise in Congress.
It’s not surprising that staffers find industry jobs attractive; for example, the salary range for a lead policy analyst at OpenAI is listed at $180-230k plus equity, significantly higher than the roughly $90k a staffer might make as a policy advisor in Congress.
Over the same period, the complexity of technology regulation, especially involving AI, has dramatically increased. A drafter of an AI bill in Congress in 2023 must grapple with ever-evolving generative AI models, copyright law, national competitiveness, GPU export controls, dual-use technology considerations, impacts on labor markets, liability law, regulatory capture, and the risk of human extinction from artificial superintelligence.
How can Congress expect to find qualified employees who can grasp these novel issues, come up with sensible regulations, and cogently explain them to their bosses, all while being willing to work for less than half of what they’d make doing the same job in industry?
Whether each AAAS “fellow” in Congress ends up pushing pro-industry AI policies is beside the point. In the long term, there is no good outcome to be had from corporations funding congressional staff: the optics are bad, and the potential for conflicts of interest is too significant.
If Congress wants to employ people with the expertise to regulate AI, the solution is to pay its policy staffers industry-competitive salaries. This may be awkward given that these highly skilled staffers would end up making more than the members of Congress they work for (average salary: $175k), but the awkwardness, and the additional taxpayer expense, are worth it to pass essential AI legislation that has constituents’ interests, and only their interests, in mind.
Open-source developers seek influence
One of the most contentious and important debates in AI today is over the merits (and safety) of closed-source vs. open-source AI models. Most of the biggest companies working on this technology (Google, Microsoft, OpenAI, Anthropic) don’t release their underlying models; Meta is the largest proponent of open-source AI. This imbalance has left open-source proponents with a smaller advocacy footprint.
In response, this week, Meta and IBM corralled organizations as diverse as the National Science Foundation, IIT Bombay, the Cleveland Clinic, and NASA to launch the AI Alliance, which will “advocate for open innovation with organizational and societal leaders, policy and regulatory bodies, and the public.”
Open-source software, a model where developers release their code for anyone to use or modify, has historically been a force for good in the software industry, with many of the internet’s foundational technologies reliant on open-source software. Open-source LLMs enable startups to easily fine-tune models for their businesses, lower costs, and avoid linking their fate to whatever Google, Microsoft, et al. choose to offer as products on any given day. Open-source models also let safety researchers and academics poke and prod for security holes and unexpected behaviors. However, some researchers don’t think leading AI models should be open-source at all. They argue that AI technology will eventually be too powerful and potentially dangerous to allow anyone access (would we “open-source” nuclear weapons technology?) and that we’re better off centralizing it so that it can be carefully managed. (Notably, the AI Alliance promises to develop standards for open-source AI safety research to parry some of the concerns held by policymakers and the press.)
Some regulatory proposals like California’s (ultimately failed) AB-331, “Automated Decision Tools”, would require strict licensing for certain classes of models. While the intent is to create safety guardrails, there are concerns that these kinds of bills would lead to regulatory capture because only the largest companies could afford to comply. Eventually this could lead to a de facto ban on open-source models, leaving secondary developers seeking to use them for research or business out in the cold.
Crafting an “AI Alliance” to be the advocacy voice for open-source AI is a savvy move by Meta given its less-than-stellar reputation in Washington, but it’s also beneficial for smaller developers, academics, and researchers who have good reason to advocate for open-source models but lack the lobbying resources to do so on their own. If super-powerful AI models arrive on the scene, the risks of open-source may ultimately sink the approach in favor of closed-source models with less visibility and more centralized control. At the moment, though, open-source options contribute meaningful benefits to the market, and a cohesive alliance advocating for them can elevate their arguments rather than leaving them a footnote to a few powerful voices.
Of Note
Technology
Meta AI unveils ‘Seamless’ translator for real-time communication across languages (VentureBeat)
Introducing Gemini: our largest and most capable AI model (Google)
Announcing Purple Llama: Towards open trust and safety in the new world of generative AI (Meta)
One Year of ChatGPT: How A.I. Changed Silicon Valley Forever (The New York Times)
Where are all the robot trucks? (The Verge)
How AI is transforming music (Time)
Runway partners with Getty Images to build enterprise ready AI tools (Runway)
Jailbroken AI Chatbots Can Jailbreak Other Chatbots (Scientific American)
Google’s DeepMind finds 2.2M crystal structures in materials science win (Ars Technica)
OpenAI Agreed to Buy $51 Million of AI Chips From a Startup Backed by CEO Sam Altman (WIRED)
“Or they could just not use it?”: The Paradox of AI Disclosure for Audience Trust in News (Toff and Simon)
Government
Congress and EU diverge on AI policy, as Brussels races to reach a deal (The Washington Post)
Silicon Valley’s AI boom collides with skeptical Sacramento (Politico)
The route to AI regulation is fraught but it’s the only way to avoid harm (Financial Times)
AI is driving Google’s health care business. Washington doesn’t know what to do about it (Politico)
For true AI governance, we need to avoid a single point of failure (Financial Times)
How China’s $70 copyright ruling impacts the world (Semafor)