Quick Hits
The EU Parliament passes the AI Act, a major AI regulatory framework; the UK opens up for AI business; Congress keeps digesting it all.
AI-driven job displacement forecasts vary, but the numbers are trending large; some hope that even with many jobs affected, a gradual pace of change will keep the transition manageable.
Benefit and Harm Forecasts Evolve
Part of the challenge with AI policy is that the economic disruptions posed by advanced AI are easy to envision but still largely hypothetical.
In the realm of job displacement, McKinsey Global Institute argues that AI could add $2.6–4.4 trillion to GDP. It names the banking, high-tech, and life sciences sectors as particularly ripe for change. What sort of change? Chiefly, reductions in cost (read: humans).
Sarah Kessler at the New York Times thinks that forecasts like McKinsey’s can be worthwhile exercises, but cautions that past predictions of large job-market changes have often been wrong. She keys in on an essential point: while AI excels at automating specific tasks, few jobs consist only of those tasks. (The A.I. Revolution Will Change Work. Nobody Agrees How.)
So instead of immediate, wholesale AI-driven layoffs, the Goldilocks outcome may be gradual productivity gains that shift the labor mix and allow the workforce to adapt over time. But if the AI development curve stays steep, a gentle economic curve will be hard to achieve.
In the U.S.
There’s activity across DC, with the White House, Congressional leaders, and large corporations all debating AI regulation. White House Chief of Staff Jeff Zients was quoted on Pod Save America, of all places, saying:
"Corporations, the leading [AI] corporations, need to do the right thing and they're stepping forward with commitments that we're working on right now that will be announced relatively shortly.” (Pod Save America interview)
In Congress, Leader Schumer’s first AI briefing starred Antonio Torralba, an MIT machine learning professor. A bipartisan group of 62 Senators attended! Whether motivated by genuine interest or fear of blurting out a “Series of Tubes”-like stinker on TV, 60%+ attendance is a number to be celebrated.
In the House, Rep. Tony Cardenas (D-CA) sat for an interview with the Washington Post to talk up bills that would require disclosure of AI content and fund enforcement staff for the FTC. This echoes the AI disclosure bill we covered last week from Rep. Ritchie Torres, and some of the provisions of the EU AI Act passed yesterday.
On the corporate front, while Microsoft and OpenAI have called for a new AI-focused federal agency, Google DeepMind argued for a “hub-and-spoke” regulatory model that’d empower an existing agency like the National Institute of Standards and Technology rather than creating a new one.
The proposal touches on where liability should sit, security standards, and the competitiveness and model-performance tradeoffs involved in implementing explainability requirements, audits, and the like. (Google)
Meanwhile:
A Georgia radio host is suing OpenAI for defamation after ChatGPT allegedly hallucinated a legal complaint accusing him of embezzling money. (Bloomberg Law)
DeSantis’ presidential campaign used AI-generated images of Donald Trump hugging Dr. Fauci. (Huff Post)
In Europe
Europe moves ahead on AI regulation, challenging tech giants’ power (Washington Post) — Yesterday, the EU Parliament approved the EU AI Act by a large margin. The Act introduces controls on law enforcement’s use of facial recognition, mandates disclosure when content is AI-generated, and requires disclosure of copyrighted material used in training.
The Act has been in gestation for years and was recently adapted to include risks from generative AI.
Taking a different approach from the EU (Brexit!), the UK Government announced a £100 million fund to work on AI safety, featuring “early or priority” access to models from DeepMind (headquartered in London), OpenAI, and Anthropic.
These efforts, along with an AI summit later this year with US participation, suggest a potential future Anglo-American regulatory regime as a laissez-faire counterbalance to Brussels.
In California
POLITICO has a great piece on the dogged efforts of Baroness Beeban Kidron to push for the Age Appropriate Design Code in the UK, and how UK tech policy made its way into the California legislature and ultimately became a de facto national standard in the U.S. Could a similar British figure (Geoffrey Hinton?) emerge to whisper AI policy ideas in the ears of California policymakers?
The U.S.’s 50 different state legislatures and a Congress largely unable to agree on any major new laws pose a huge challenge to would-be reformers of the tech industry. But in that void, guerrilla-style figures like Kidron can have outsized influence by working behind the scenes.
Her idea is that the onus belongs on big tech companies, not on parents and guardians, to reshape their services so that kids are treated differently from adults. Not everyone agrees: some conservative states are passing laws that rely more on parental oversight than on the corporate reforms in California’s law. Still, by virtue of California’s size and importance to the tech business, Sacramento’s legislation has become the country’s de facto standard.
Other
Humans Are Biased. Generative AI is Even Worse. (Bloomberg)
The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.
Awesome visualizations, not-so-great conclusions.
Opinion: The Law Is Coming for AI—But Maybe Not the Law You Think (The Information)
Dutch law professor M.R. Leiser thinks that generative AI companies could win the copyright battle but risk serious privacy violations… at least in the EU.
Google Books, for instance, parried legal challenges from authors, publishers and other stakeholders for more than a decade. In the end, a federal judge ruled that Google’s ambitious digitization effort constituted fair use, a legally recognized defense against copyright infringement. That was a landmark decision in the tech law canon, establishing that courts must balance the private interests of copyright holders against the public’s interest in spreading information and encouraging innovation.
As with Google Books, the question for courts considering generative AI cases will ultimately amount to whether LLMs can squeeze their practices into one of the recognized copyright exceptions.
The LLaMA is out of the bag. Should we expect a tidal wave of disinformation? (AI Snake Oil)
Seth Lazar suggests that the risk of LLM-based disinformation is overblown because the cost of producing lies is not the limiting factor in influence operations. We agree. Spam may be a useful analogy: the challenge for spammers is likely not the cost of generating spam emails, but locating the tiny fraction of people who might fall for the scam.
Startups are using ChatGPT to meet soaring demand for chatbot therapy (Semafor)
For the last few months, Mark, 30, has relied on OpenAI’s ChatGPT to be his therapist. He told the chatbot about his struggles, and found it responded with empathy and helpful recommendations, like how to best support a family member while they grieved from the loss of a pet. “To my surprise, the experience was overwhelmingly positive,” said Mark, who asked to use a pseudonym. “The advice I received was on par with, if not better than, advice from real therapists.”
Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam (McAfee)
Cybercriminals have taken up newly forged artificial intelligence (AI) voice cloning tools and created a new breed of scam. With a small sample of audio, they can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts.