This week…
Meta releases a state-of-the-art large language model for free and with few restrictions, which might confound regulatory efforts like a recently failed AI bill in California
Labor starts to play offense in AI politics and the White House announces AI safeguards agreed to by large AI companies
DeSantis PAC uses an AI-generated Trump voice in a $1 million ad buy in Iowa
Wrangling Open Models like Llama 2 in California
Meta’s latest-and-greatest AI model, Llama 2, is on the loose, and it’s a challenge for emerging AI regulatory frameworks. One regulatory effort that illustrates the complications of regulating open models is CA AB 331, “Automated Decision Tools”, which recently premiered and promptly ran aground in the California state legislature. Introduced by Assembly Member Rebecca Bauer-Kahan and co-sponsored by incoming Speaker Robert Rivas, AB 331 would have required:
‘Developers’ and ‘deployers’ of automated decision tools to perform annual impact assessments and report the findings to the California Civil Rights Department; and
Anyone deploying these systems to notify a person subject to automated decision making of that fact, and to accommodate requests to opt out of it.
Essentially, if automated tools are used to make decisions about someone, that person must be notified and given a way to request alternative human review. The bill would have applied broadly to employment, education, housing, utilities, family planning, health care/insurance, financial services, criminal justice, legal services and voting.
This bill and other regulatory proposals haven’t come to grips with the complexity of multiple layers of training and deployment stacks. For example, AB 331 defines a “Developer” as any entity that designs, codes, produces or substantially modifies an automated system or service to make consequential decisions.
Let's consider a real-world sequence:
An open-source developer (based, say, in Germany) fine-tunes Meta’s Llama model and makes it available on the developer platform Hugging Face
A commercial AI startup then packages the model for distribution
A client supplements this commercial model with its own data
And finally, the AI system makes a consequential decision when a customer interacts with it.
The conundrum here is determining who counts as the “developer” under AB 331’s definition. There are five potential contenders: Meta, the open-source developer who fine-tuned the model on Hugging Face, the commercial AI startup, the cloud provider (like AWS) hosting it all, and the client that added its own data. Each has a role in the design, coding, production or modification of the AI system, and thus potentially falls under the regulatory umbrella.
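To make the layering concrete, here is a minimal sketch of just the first step in that sequence, in Python using Hugging Face’s transformers and peft libraries. The repository name, adapter settings, and training details here are illustrative assumptions, not anything the bill (or Meta) specifies:

```python
# Step 1 from the sequence above: an open-source developer fine-tunes
# Meta's base model and republishes the result for anyone to build on.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-2-7b-hf"  # Meta trained and released these weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach a small LoRA adapter -- plausibly a "substantial modification"
# in AB 331's terms, though the bill doesn't define the threshold.
adapter_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, adapter_config)

# ... fine-tuning on the developer's own dataset would happen here ...

# Publish the adapter weights, where the startup in step 2 can pick them up.
model.push_to_hub("example-dev/llama-2-7b-tuned")  # hypothetical repo id
```

Each later actor repeats some version of this pattern with their own data and infrastructure, so by the time a consequential decision reaches a person, several parties have each “designed, coded, produced or substantially modified” the system.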
While this particular bill didn’t succeed, its concepts are popular and broadly sensible enough that similar proposals are virtually guaranteed to surface soon, and that’s probably a good thing! At the moment there are almost no consequences for releasing potentially harmful models, which leaves individuals exposed to everything from minor bias to dangerous decisions with no ability to appeal. The next iterations, however, need to grapple with the full complexity of the technologies being deployed so that risk mitigation is targeted effectively and doesn’t chill innovation across the board.
National news
DeSantis PAC uses AI-generated Trump voice in ad attacking ex-president (POLITICO)
Although Trump did write what he says in the ad, he didn’t say it. When people talk about fearing how AI will make it easier to create misinformation in political communications, this is… pretty much it? It’s happening at scale, on the airwaves and online, in the Republican primary between the number-one and number-two candidates. And yet, it’s not picking up much notice.
But the audio that the spot uses is not actually from Trump. A person familiar with the ad confirmed Trump’s voice was AI generated. Its content appears to be based off of a post that Trump made on his social media site Truth Social last week. The person said it will run statewide in Iowa tomorrow and that the ad buy was at least $1 million — a massive sum though one doable for the well-heeled super PAC. It will also be running via text message and on digital platforms.
Meta Unveils a More Powerful A.I. and Isn’t Fretting Over Who Uses It (The New York Times)
A significant part of the regulatory discourse in DC has been about whether foundation model builders should be required to hold a license to train models above a certain size. Meta opted for a more open approach, releasing Llama 2 this week, a new version of its open large language model that is on par with OpenAI’s GPT-3, and allowing other companies (excepting other social media companies) to use it for their own purposes without a fee.
“When software is open, more people can scrutinize it to identify and fix potential issues,” Mr. Zuckerberg said in a post to his personal Facebook page.
The latest version of Meta’s A.I. was created with 40 percent more data than the version the company released just a few months ago and is believed to be considerably more powerful. And Meta is providing a detailed road map that shows how developers can work with the vast amount of data it has collected.
About half of the academic paper released alongside the model goes into significant detail on the procedures the model’s trainers used to improve its safety before release. Proponents of open development argue that releasing an open model creates chaos up front, and that the chaos surfaces problems and enables more effective future safeguards to be developed. But it’s also…chaos?
7 A.I. Companies Agree to Safeguards After Pressure From the White House (NYT)
The White House announced this morning that Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI had agreed to a set of voluntary safeguards, including security testing, watermarks on AI-generated material, public reporting, research on bias and discrimination, and a commitment to deploying AI resources to fight issues like cancer and climate change.
A good step, but also mostly signaling, as these large, PR-conscious companies would likely have done most if not all of these things anyway.
Labor begins to play offense
Organized labor, a significant force in American politics, has begun to ramp up its attention to AI.
This week, the NYT profiled Ylonda Sherrod, an AT&T call center worker and VP of her local chapter of the Communications Workers of America, illustrating how AI tools have become ubiquitous in her workplace and how concerns about replacement are top of mind for her and her colleagues. While AT&T claims that its uses of AI tools are focused on empowering its employees to work more efficiently, the immediate risk of automation and subsequent downsizing in the customer service industry is very real. As this Economist piece points out, technology adoption is often unevenly distributed, but the biggest winners find a way to incorporate new tech quickly.
In Hollywood, both the actors’ and writers’ guilds have focused part of their public positioning on the use of AI writing and imagery. AI is not playing a major role in production today, and the massive transition to streaming entertainment — “trading TV dollars for digital dimes” — is a much more immediate economic threat to these workers, but it’s reasonable to expect that AI will be transformative in the near future and sensible for these unions to get ahead of it.
“‘Not for Machines to Harvest’: Data Revolts Break Out Against A.I.” dives into independent fiction writers and illustrators who publish online and their pushback against the idea that AI models may use their creations as training data without compensation. While not represented by unions, these constituencies represent another facet of what is becoming modern labor.
In the WSJ’s Business and Labor Square Off Over AI’s Future in American Workplace, Amanda Ballantyne, director of AFL-CIO’s technology institute, notes:
Unions are concerned not only about job losses, but about companies using AI applications to keep tabs on workers outside of their jobs, where an AI-driven system might identify a group of workers carrying their employer-issued smartphones to a union organizing meeting….
One senator’s big idea for AI (POLITICO)
As Sen. Schumer’s AI forums continue, more hearings are scheduled (next week!), and regulatory agencies try to grab their piece of the AI pie, POLITICO dives into Sen. Gary Peters’ (D-Mich.) push to regulate the US Government’s own use of AI.
While not a headline name in the broader conversation about AI in Washington, Peters — who chairs the Senate Committee on Homeland Security and Governmental Affairs — pushed several AI bills through Congress in the years preceding this spring’s sudden hype cycle. He’s already sent two AI bills to the Senate floor this year. And last week Peters quietly introduced a third bill, the AI LEAD Act, which is scheduled for a markup on Wednesday.
His bills focus exclusively on the federal government, setting rules for AI training, transparency and how agencies buy AI-driven systems. Though narrower than the plans of Schumer and others, they also face less friction and uncertainty — and may offer the simplest way for Congress to shape the emerging industry, particularly when it’s not clear what other leverage Washington has.
Can AI Invent? (NYT)
A central goal of Dr. Abbott’s project is to provoke and promote discussion about artificial intelligence and invention. Without patent protection, he said, A.I. innovations will be hidden in the murky realm of trade secrets rather than disclosed in a public filing, slowing progress in the field. The Artificial Inventor Project, said Mark Lemley, a professor at the Stanford Law School, “has made us confront this hard problem and exposed the cracks in the system.”
But patent arbiters generally agree on one thing: An inventor has to be human, at least under current standards.
SEC’s Gensler Warns AI Risks Financial Stability (Bloomberg)
“AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator,” he said in remarks prepared for a speech on Monday before the National Press Club in Washington. “While current model risk management guidance — generally written prior to this new wave of data analytics — will need to be updated, it will not be sufficient.”