Welcome to our new subscribers this week. If you haven’t subscribed already, join us:
This week, political coalitions form among big AI companies, open-source actors, and publishers as each group starts pressing its advantages in politics and the courts; AI warfare is arriving sooner than we’d hoped; and more…
Coalitions emerge in the AI debates
Last week, the White House and a group of large AI companies agreed on a set of aspirational norms; this week, coalitions are emerging. Anthropic, Google, Microsoft, and OpenAI launched the Frontier Model Forum, an industry body focused on ensuring safe and responsible development of frontier AI models. They say it’ll be a cross-company forum that will help share safety best practices, coordinate research, and exchange knowledge with academics, government, and others. These are all good things, but the initiative is also an obvious attempt to head off external regulation.
The effort will assuredly not be completely effective at achieving its stated goals. Notably missing from the coalition is Meta, one of the largest players in the space, not to mention any representation from open-source efforts. Anyone outside the coalition has no real incentive to follow its guidance. To use a tangible example, if the forum urges the placement of watermarks on AI-generated images, and open-source efforts ignore that guidance in order to gain market share, it’s easy to see how adherence falls apart without government enforcement. For now, it’s effectively a facsimile of the talking points that came out of last week’s White House announcement, so it’ll be up to these companies to prove there’s substance beyond the press release.
In parallel, TechNet, a tech industry lobbying group whose members include Amazon, Apple, Google, and Meta, announced a $25M AI “education” campaign to persuade Congress that AI ain’t all bad. The campaign will consist of Capitol Hill events, digital and TV advertising, and “coalition-building,” all par for the course in tech lobbying.
On Wednesday, another emerging coalition – companies with interests in open-source AI – submitted suggestions to the European Parliament to modify the rules and ease potential burdens on open-source AI developers who lack compliance resources. This group includes Hugging Face, GitHub (Microsoft), and others. Among other asks, they object to requiring “costly” third-party auditors and want a clarification that hobbyists and researchers working with open-source tools should not be subject to the regulation. As we wrote about last week, the overlapping AI supply chain of tools and providers can be quite complex and challenging to translate into clear regulatory guidance, and we’re seeing this play out as this landmark legislation continues toward becoming a reality. (The Verge)
AI for defense: the trajectory seems clear
There was a lot of exploration this week of both the theory and the real-world effects of AI on warfare. The discussion has two major elements: great-power competition theory in the mold of nuclear weapons, and the immediate, messy, day-to-day reality of how autonomy plays a role in the Ukraine fight.
The theory
Alex Karp, the CEO of Palantir, argued in a New York Times op-ed for proactive development of AI weapons, adapting the nuclear weapons deterrence argument to AI – if we don’t, they will. Jeremy Ashkenas, an editor in the Times’ opinion section, added his own twist in Not Everyone Is Against A.I. Weapons Research. Here’s Why., noting that Karp’s argument will likely come true unless a global effort is made to sit out the arms race for the good of humanity. The global proposals shopped around by governments often concentrate on the economic effects of AI; a nuclear weapons-style global test ban on military AI doesn’t seem to be coming anytime soon. The implementation details of such a ban would also be complex. Networked autonomy is harder to boil down into something like global caps on the number of weapons or rockets a country has, and as we see in the Ukraine war, making tiny daily adjustments to improve your side’s advantage is the difference between life and death.
The reality
In Ukraine, both domestic and foreign drone manufacturers are using AI and other forms of autonomy to increase the survivability of their drones, and the press is taking notice, both of its role in everyday fighting and of its implications for the future. In The war in Ukraine is spurring a revolution in drone warfare using AI, The Washington Post profiled a Ukrainian drone manufacturer that uses “AI” to keep drones locked on their targets even if their radio and cell links are jammed. WIRED profiled Helsing, a German defense software company: an Anduril-esque operation focused on the European “democracy” market that, like Anduril itself, claims to be deployed in Ukraine.
Except perhaps in Karp’s argument, the “AI” mentioned breathlessly in these headlines is not the large language model-driven kind that has dominated recent coverage, but almost assuredly simpler automated rules. That shouldn’t be so reassuring: if it works, it doesn’t need to be fancy. Once something is automated, the temptation is to enhance that automation over time. From the perspective of the warfighter, that improvement may be critical; for civilization, it will be worrisome, and it’s essential to develop shared guidelines wherever possible.
A press coalition forms around copyright
Publishers want billions, not millions, from AI (Semafor)
In the latest copyright law development, Barry Diller’s company IAC is organizing publishers to mount one of the largest challenges to date against AI companies training LLMs.
Now his company, IAC, and a handful of key publishers are close to formalizing a coalition that could lead a lawsuit as well as press for legislative action, people at those companies said. The group crucially includes the two industry pillars, The New York Times and News Corp., as well as Axel Springer.
Of note
Lindsey Graham and Elizabeth Warren: When It Comes to Big Tech, Enough Is Enough (Op-Ed) Two polar political opposites propose a new tech regulatory agency.
Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots (NYT). A new paper demonstrates jailbreaks that work even on the latest models across companies.
ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ (CNN) Even OpenAI can’t detect AI-generated content.
Dario Amodei interviewed on the Hard Fork podcast. An informative interview with the safety-focused CEO of Anthropic, who succinctly explains the nitty-gritty behind their “constitutional AI” approach.
How AI and deepfakes are changing politics (CBS News) Not a lot of new substance here, but a prominent placement.
👋