Is that a pile of GPUs or Ron DeSantis?
This week, DeSantis attempts an even less human form of communication; AI is radically democratizing content moderation, and everyone’s desperate for chips
'I Was Programmed To Value Your Opinion,' Says DeSantis (Bot)
Last week we wrote about the possibility of campaigns using AI-generated content in personalized conversations to sway voters, and this week it seems the DeSantis campaign or an affiliated SuperPAC is taking a stab at it.
Alan Johnson, a software engineer in South Carolina, posted about receiving texts that read an awful lot like the output of a conversation with ChatGPT. It’s hard to look at these texts and think anyone would be persuaded to vote for DeSantis (or even disclose useful personal information) after this interaction. Still, the exchange is notable, not only because it’s one of the first instances of a political actor attempting interactive voter identification or persuasion with the newest generation of LLMs, but because it raises thorny legal and regulatory questions.
It’s worth reading through the (very brief) exchange in full, but there are two key messages to note - the initial one, ostensibly from a volunteer named Liz, and a later message confessing that the texter is actually an AI language model:
What exactly is going on here? Is a human or an AI model sending these messages, and does it matter?
Without an inside view of the campaign operation, we can surmise three possibilities:
100% bot: an LLM-based bot is generating and sending every message, including pretending to be campaign volunteer “Liz” in the first message.
Bot content but human sender: A human (“Liz”) is pressing ‘send’ on whatever content an LLM is generating in response to the recipient.
Initial human sender, subsequent bot content and sender: A human (“Liz”) sent the first message, but a bot is automatically responding to every message with no human in the loop after Liz’s first interaction.
Long before ChatGPT was in consumers’ hands, campaigns (and any other entity that wanted to send you a text) needed to comply with the Telephone Consumer Protection Act (TCPA). The TCPA requires that recipients of automated texts or calls to their cell phones have opted in to receiving the communication. (The Supreme Court watered down the TCPA in 2021, but the requirement lives on in state-level regulations, particularly in Florida.)
Whether or not this conversation ran afoul of existing laws like the TCPA and/or California’s Bot Disclosure Act depends on both the recipient’s state of residence and the actual combination of bot and human generating and sending these messages. It’s a very murky area to operate in, and probably opens the DeSantis campaign (or affiliated SuperPAC) up to liability since it’s unlikely they were able to filter out any recipients who may have a South Carolina number but now live in Florida or California.
Beyond possible legal breaches, the more interesting question is that of identity. Regulation of AI in communications will have to contend with what, exactly, counts as strictly AI-generated content, and what’s considered human interaction or modulation thereof. If a bot generates all the content but a human presses send, does that count as a bot or does that final touch make it sufficiently human?
Transparency whenever possible is ethical, engenders trust over the long term, and generally creates accountability for any content an organization sends. Requiring disclosure whenever a bot is generating and sending communications without any human oversight (the 100% bot scenario) is both sensible and urgently needed. And since LLMs make convincing spam much cheaper to generate, requiring disclosure would let recipients filter out low-value interactions and invest in meaningful human conversations.
And what of a human previewing, possibly editing, and manually sending LLM-generated content? A lack of sufficient oversight would lead to a poor consumer experience akin to that of the DeSantis texts, but properly trained humans could intervene prior to sending, reducing the risk of sending patently wrong, inappropriate, or overly manipulative content. In general, it seems sensible that AI regulation should encourage communication applications with humans in the loop, and punish applications that use AI without disclosure.
Democratizing content moderation for good…and the unknown
Generative AI is democratizing essay writing, image creation, and… unexpectedly, content moderation. Content moderation – removing undesirable content like exploitative sexual material or graphic violence en masse from Internet platforms – is a technically difficult, emotionally draining, and politically fraught area that was historically mostly a problem for large social media companies. (And one where AI systems have long played a role.) Now, generative AI is placing content moderation in the hands of everyone from scrappy tech companies and parents to Iowa teachers forced to ban “explicit” books from their schools:
To identify mobile apps that might pose dangers to children, a team of UC Berkeley and UMass computer scientists used an AI model to screen iOS and Google Play app store reviews for signs of user complaints. Of 550 apps checked, 14% had seven or more complaints, and the AI was a more efficient and effective method of screening than parents scrolling through reviews individually.
In Iowa, following the Republican state government’s ban on books that depict sexual acts in schools, local school administrators are using ChatGPT to determine whether books violate the new law. The Iowa Gazette details the thinking. “ChatGPT banning books” sounds ridiculous as a headline, but it turns out to be a locally rational response to a ludicrous situation: school libraries hold thousands of books, none cataloged by sexual content, and teachers and librarians would face disciplinary action for non-compliance.
In Iowa, ChatGPT relieved the immediate pressure on teachers to comply with a law many of them disagree with, but haphazard manual compliance may actually have produced a better outcome for students: the teachers noted they’ve never received a complaint from a parent about a book in a library, so the harm here seems strictly hypothetical, while the downside of censorship en masse is huge. The benefits are more clear-cut when the content is obviously horrifying. Without effective trust and safety operations, the mobile apps screened by the researchers could let sexual predators run free. An inexpensive and effective content moderation AI, like the one OpenAI announced earlier this week, would be a big help in shutting that down.
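To make the mechanics a bit more concrete, here’s a minimal sketch of the kind of screening described above, written against the OpenAI Python SDK. The policy prompt, the model choice, and the sample reviews are all illustrative assumptions, not details from the Berkeley/UMass study or from OpenAI’s announced moderation system:

```python
# pip install openai
# Minimal, illustrative sketch: ask a chat model whether each piece of text
# violates a stated policy, and queue flagged items for a human to review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical policy question (a stand-in, not the researchers' actual prompt).
POLICY_QUESTION = (
    "Does the following app-store review describe a child-safety complaint "
    "(for example, strangers contacting minors or sexual content)? "
    "Answer only YES or NO."
)

reviews = [
    "My kid was contacted by strangers in this app's chat.",
    "Great puzzle game, my whole family plays it.",
]

flagged_for_human_review = []
for text in reviews:
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model; any capable chat model works
        messages=[
            {"role": "system", "content": POLICY_QUESTION},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    if resp.choices[0].message.content.strip().upper().startswith("YES"):
        flagged_for_human_review.append(text)

print(f"{len(flagged_for_human_review)} of {len(reviews)} reviews flagged for human review")
```

The important design choice is that the model only triages: it surfaces candidates cheaply and at scale, and a human still makes the final call on anything it flags, which is exactly the human-in-the-loop posture argued for above.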
Content moderation can be a devastating job for the humans behind it. Better AI moderation spares people that harm while also putting effective moderation within reach of companies that otherwise couldn’t afford it – but extreme care will be needed to ensure it isn’t used to censor merely controversial content.
Chips, chips, chips
The lifeblood of the burgeoning AI industry (and much of modern life besides), semiconductors are in increasingly short supply. Everyone from geopolitical power players to scrappy AI startups is competing for access, and the winners will have a major advantage. The New York Times’ The Daily podcast produced an excellent long-form episode detailing the impact of the U.S. Bureau of Industry and Security’s October 2022 export controls, which effectively kneecapped China’s access to the most advanced chips, at least in the short term. With the recent GPU land grab, those controls are more relevant than ever.
Of note
Campaigns
The 2023 Guide to Generative AI and Political Misinformation (96 Layers)
How a startup is using AI to write fundraising emails (The Hill)
ChatGPT leans liberal, research shows (Washington Post)
Global
Anthropic raises $100 mln from South Korea's SK Telecom (Reuters)
China Wants to Regulate Its Artificial Intelligence Sector Without Crushing It (TIME) An algorithm registry, disclosure of AI-generated content, data access for regulators, and a public complaint mechanism, all fitting within China’s overall regime of cultural control.
The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late? (Foreign Affairs) Notably written by Ian Bremmer and Mustafa Suleyman (co-founder of DeepMind), calling for a new worldwide AI-risk body.
Miscellaneous
Will A.I. Become the New McKinsey? (The New Yorker)
What happens when thousands of hackers try to break AI chatbots (NPR)
AI is setting off a great scramble for data (The Economist)
Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect (WIRED)
Professors have a summer assignment: Prevent ChatGPT chaos in the fall (Washington Post)
Robotaxi fight intensifies as California approves San Francisco expansion (POLITICO)
Threat Actors are Interested in Generative AI, but Use Remains Limited (Mandiant)
Ex-Google CEO Eric Schmidt to launch AI-science moonshot (Semafor)