AI regulation is going nowhere, fast
No action in Congress, an AI bill fails in the CA assembly, and Japan goes all-in
“A congressman shrugging” -DALL·E
While more high-profile researchers and CEOs raise the alarm about the possible existential risks of AI, the arms-race dynamic means there’s been little meaningful work on legislative and regulatory checks anywhere except the EU. The congressional hearings with Sam Altman have come and gone, and Congress is now focused on the debt ceiling deal. It’s too early to say whether any meaningful regulation will come out of those hearings, but there’s been no action yet. At the state level, the California assembly just killed one of the few AI-related bills proposed this session (though this is probably a good thing, as it was unnecessarily burdensome), and other states haven’t made much progress either. In the rest of the world, China and Japan are taking very different approaches to regulating AI, but both countries have decided the technology is essential to their economies and are investing heavily in it.
News
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn (NY Times, 5/30/23)
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement expected to be released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.
The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
It’s notable that although a number of prominent private-sector executives signed the open letter, it came without any pledges to invest time or capital in AI safety.
White House press shop adjusts to proliferation of AI deep fakes (POLITICO, 5/31/23)
The White House press shop has found itself on one of the many front lines of the AI battles. Aides there, who collectively handle hundreds of media inquiries a day, have already been briefed by experts on the potential national security risks posed by images and videos that have been altered using AI, according to an administration official.
Buried in this story is the somewhat alarming anecdote that last week’s fake image of the Pentagon on fire was so convincing that it caused the White House to call the Pentagon and summon the NSC before realizing it wasn’t real.
ChatGPT Risks Divide Biden Administration Over EU’s AI Rules (Bloomberg, 5/30/23)
Some White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said. Meanwhile, US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people, who asked not to be identified because the information isn’t public.
California legislature kills burdensome AI bill (The Sacramento Bee, 5/19/23)
Assemblywoman Rebecca Bauer-Kahan, D-Orinda, set out this year with a goal of protecting Californians from bias as artificial intelligence is increasingly used to make decisions around health care, housing and employment.
AB 331, authored by Bauer-Kahan, called for prohibiting the use of any “automated decision tool” — a system or service that uses artificial intelligence to make decisions — that results in discrimination, and would have mandated that developers and users of such tools conduct an impact assessment.
But on Thursday, the measure became one of dozens that were quietly killed for the year. The Assembly and Senate Appropriations Committees went through nearly 1,200 bills, including AB 331, to weed out any measures deemed too expensive, overly cumbersome, unnecessary or politically inconvenient.
Federal judge: No AI in my courtroom unless a human verifies its accuracy (Ars Technica, 5/31/23)
A federal judge in Texas has a new rule for lawyers in his courtroom: No submissions written by artificial intelligence unless the AI's output is checked by a human. US District Judge Brantley Starr also ordered lawyers to file certificates attesting that their filings are either written or reviewed by humans.
"All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being," according to a new "judge-specific requirement" in Starr's courtroom.
China Is Flirting With AI Catastrophe (Foreign Affairs, 5/30/23)
Image: The Zhongguancun National Innovation Demonstration Zone Exhibition Center in Beijing, February 2022 (Foreign Affairs)
According to an Ipsos survey published in 2022, only 35 percent of Americans believe that AI’s benefits outweigh its risks, making the United States among the most pessimistic countries in the world about the technology’s promise. Surveys of engineers in American AI labs suggest that they may be, if anything, more safety-conscious than the broader public…

…China, by contrast, ranks as the most optimistic country in the world when it comes to AI, with nearly four out of five Chinese nationals professing faith in its benefits over its risks. Whereas the United States government and Silicon Valley are many years into a backlash against a “move fast and break things” mentality, China’s tech companies and government still pride themselves on embracing that ethos. Chinese technology leaders are enthusiastic about their government’s willingness to live with AI risks that, in the words of veteran AI expert and Chinese technology executive Kai-Fu Lee, would “scare away risk-sensitive American politicians.”
Japan goes all in on AI (Technomancers.ai, 5/30/23)
In a surprising move, Japan’s government recently reaffirmed that it will not enforce copyrights on data used in AI training. The policy allows AI to use any data “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise.” Keiko Nagaoka, Japanese Minister of Education, Culture, Sports, Science, and Technology, confirmed the bold stance at a local meeting, saying that Japan’s laws won’t protect copyrighted materials used in AI datasets.
UAE developing powerful large language models (Twitter, 5/31/23)
Ranked #1 globally on Hugging Face’s leaderboard for large language models (LLMs), Falcon 40B outperforms competitors like Meta’s LLaMA and Stability AI’s StableLM. Because it is released under the permissive Apache 2.0 license, Falcon 40B end-users also receive a grant to any patents covering the software, leaving it open for unrestricted commercial use.
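For readers who want to try it, the model weights are published on the Hugging Face Hub. Below is a minimal sketch of generating text with it via the transformers library, assuming the public Hub identifier tiiuae/falcon-40b; note that the full 40B model needs on the order of 80+ GB of GPU memory, so the smaller tiiuae/falcon-7b variant is the more practical choice for experimentation.

```python
# Minimal sketch: generating text with Falcon via Hugging Face transformers.
# Assumes torch, transformers, and accelerate are installed; swap in
# "tiiuae/falcon-7b" if you don't have the ~80+ GB of GPU memory
# the 40B model requires.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # Falcon shipped custom modeling code at release
    device_map="auto",       # shard weights across available GPUs
)

prompt = "The UAE's Falcon 40B model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the license is Apache 2.0 rather than LLaMA-style research-only terms, this kind of local use is permitted for commercial products as well as experiments.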