Waning interest, rising momentum
As ChatGPT usage fades a bit, the machinery of government gets going
There’s a disconnect brewing. The public’s interest in AI might be cooling, but government’s is increasing. The EU AI Act passed, and in recent days both President Biden and Senate Majority Leader Schumer talked up major AI efforts. They seem to recognize that AI is complicated enough that help is needed from industry and outside experts.
It will take months, and more likely years, to produce worthy bills. With summer here and final essays turned in, Americans may move on from AI; if it does not become a driver of the 2024 elections, legislative interest cannot be assumed to continue.
Public support is the power behind everything political, and polling is consistently supportive of AI regulation. In May, Data for Progress released a poll fielded in late March. Consistent with a May Reuters/Ipsos poll and a Change Research poll from earlier in the year, Data for Progress found that, among likely voters:
47% are ‘concerned’ about ChatGPT, with older Americans more concerned;
62% support creating a dedicated federal agency to regulate AI; and
79% support requiring transparency about AI training data.
However, late March may have been the high-water mark of public interest in ChatGPT. Google Trends data (an imperfect but available measure) shows declining search interest in ChatGPT.
ChatGPT had the fastest growth trajectory of any consumer technology in history. A cooldown is inevitable, so treat this trend with skepticism. Still, AI’s power and potential for massive economic transformation remain. Will legislators stay focused?
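For readers who want to reproduce that Google Trends check, here is a minimal sketch using the pytrends package (an unofficial, community-maintained library, not a Google product; results will shift depending on when the query is run):

```python
# Minimal sketch: pull 12 months of Google Trends search interest for "ChatGPT"
# using the unofficial pytrends package (pip install pytrends).
# Google may rate-limit or change the underlying endpoint at any time.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["ChatGPT"], timeframe="today 12-m")

interest = pytrends.interest_over_time()  # weekly 0-100 index, 100 = peak interest
print(interest["ChatGPT"].tail(8))        # most recent weeks of search interest
```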
In the US
In San Francisco, Biden talks with tech leaders about risks and promises of artificial intelligence (AP)
After meeting with AI industry leaders in DC, President Biden squeezed in a meeting with a group of technologists and academics between fundraisers. Attendees included Tristan Harris of the Center for Humane Technology, Fei-Fei Li of Stanford’s Human-Centered AI Institute, and Jim Steyer of Common Sense Media. Harris and Steyer have been two of the most prominent critics of social media companies, and both are now turning their advocacy efforts to AI.
The administration is clearly eager to show that the president is listening to tech voices across the spectrum of industry, academia, and consumer advocacy groups, but we have seen few concrete actions from the White House so far.
Schumer launches ‘all hands on deck’ push to regulate AI (Washington Post)
“Don’t count Congress out,” he said:
Schumer unveiled what he called a “new process” for fielding input from industry representatives, researchers and consumer advocates, a system designed to turbocharge lawmakers’ understanding of the complicated area and speed regulation along. In expansive remarks Wednesday, he urged officials to move cautiously to avoid hindering U.S. innovation while deputizing key lawmakers across the chamber to hammer out bipartisan proposals to head off concerns about the technology’s impact on privacy, intellectual property and national security.
Branded as the “SAFE Innovation framework” — there must be an acronym* — Schumer said he’s hoping to have results in “months,” avoiding the typical open hearings in favor of what he calls “AI Insight Forums” beginning in September.
In the House, a bipartisan group including Stanford CS grad Rep. Ted Lieu (D-CA) and “Freedom from Big Tech Caucus” founder Rep. Ken Buck (R-CO) introduced the National AI Commission Act. The act would create an expert panel to explore AI regulation, possible governmental structures, and approaches. Common elements with Schumer’s plan in the Senate include recognition by members that AI’s complexity and development speed will likely require exceptional treatment beyond the typical Congressional process, and that outside help is needed to bring lawmakers up to speed in an area where precious few possess any semblance of expertise.
*SAFE, of course, stands for Security, Accountability, Foundations, Explainability.
Meanwhile:
The Federal Communications Commission and the National Science Foundation announced a workshop next month on “opportunities and challenges of artificial intelligence for communications networks and consumers.”
The FEC — the elections regulator — announced that the use of artificial intelligence in campaign ads is top of the agenda for the Commission’s meeting today.
In Europe
Do Foundation Model Providers Comply with the Draft EU AI Act? (Stanford Human-Centered AI Lab)
…not yet:
Our results demonstrate a striking range in compliance [with the Draft EU AI Act] across model providers: some providers score less than 25% (AI21 Labs, Aleph Alpha, Anthropic) and only one provider scores at least 75% (Hugging Face/BigScience) at present. Even for the highest-scoring providers, there is still significant margin for improvement. This confirms that the Act (if enacted, obeyed, and enforced) would yield significant change to the ecosystem, making substantial progress towards more transparency and accountability.
Google forced to postpone Bard chatbot’s EU launch over privacy concerns (Politico)
The Irish Data Protection Commission said Tuesday that the tech giant had so far provided insufficient information about how its generative AI tool protects Europeans' privacy to justify an EU launch. The Dublin-based authority is Google's main European data supervisor under the bloc's General Data Protection Regulation (GDPR).
Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation (TIME)
…OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company… In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.
How I’ll help make the AI revolution safe (The Times)
By Ian Hogarth, Chairman of the UK Foundation Model Taskforce, which PM Rishi Sunak announced last week:
And moving fast is so important because this technology is moving exponentially. Every week there are new breakthroughs. We need to move with agility. Modelled on the Vaccine Taskforce, it has the freedom to bring in leading experts from across the world to work out what guardrails need to be in place on this technology.
California
A powerful Democrat emerges from rural California after bitter leadership fight (POLITICO)
POLITICO profiled Robert Rivas, set to become the next California Assembly Speaker. Of interest is Rivas’s co-authorship of AB 331, a maximalist AI bill that would require anyone developing or deploying AI systems to provide reams of paperwork.
…the Assembly speaker is one of the most influential politicians in the state — a role once filled by towering figures like Willie Brown, “the ayatollah of the Assembly,” and Jesse “Big Daddy” Unruh. Speakers can elevate colleagues to influential committees, sideline others, and play an outsize role in which bills make it to the governor’s desk. They also command a multimillion-dollar campaign operation.
Now, with Newsom claiming a national Democratic leadership mantle, 43-year-old Rivas will shape the progressive agenda in one of America’s most influential blue states — if he can unify his party.
Other
Fakery and confusion: Campaigns brace for explosion of AI in 2024 (POLITICO)
“I’m much more worried in the short-term,” said Pat Dennis, president of the Democratic-led opposition research group American Bridge 21st Century, pointing to the fact that “we don’t know what’s coming.” Bad actors will have “an exponentially easier time writing more stuff, flooding the zone,” he said.
AI music that contains ‘no human authorship’ won’t be eligible for a Grammy Award (Music Business Worldwide)
This connects to the question of AI authorship, a noted aspect of the ongoing Writers Guild strike:
The updated rules further stipulated that a work that contains no human authorship is not eligible in any categories for the Awards. However, the Recording Academy noted that a work that features elements of AI material is eligible in applicable categories, although the human authorship component of the work submitted “must be meaningful.”