The lawsuits have arrived
A class-action lawsuit is brought against OpenAI, world leaders continue to discuss the technology, and OpenAI hires a top European lobbyist
Class action lawsuits are gaining steam
Yesterday, a class action lawsuit against OpenAI and Microsoft was filed in California, alleging that the companies stole the private data of hundreds of millions of internet users without their consent, in order to train large language models like GPT-3 and GPT-4. The suit seeks damages from the companies in the form of “data dividends”, which would compensate these users for the value of their data sucked up en masse during LLM training runs.
Sam Altman, Getty Images
This is not the first lawsuit challenging the notion, widely promoted by AI companies, that their massive internet scraping and model training operations are covered under fair use; in January, three artists filed a class action lawsuit against Stability AI, Midjourney and DeviantArt claiming infringement of copyrighted images in the training of text-to-image diffusion models.
Class action lawsuits take years to resolve, and the legitimacy of AI model training under the fair use doctrine will presumably make its way to the Supreme Court. The Court recently ruled against the foundation of the late artist Andy Warhol in a fair use case, concluding that Lynn Goldsmith's photograph of the musician Prince deserved copyright protection against being adapted into a silkscreen portrait and licensed without compensation.
Still, a fertile environment for litigation is an important precursor for regulatory activity. The threat of billions of dollars in legal damages has a way of concentrating minds in the C-suite. Instead of deploying lobbyists to fend off burdensome complaints from consumer groups, AI companies may soon change their tone and ask for affirmative regulation as a shield from litigation.
In a divided Congress, that means that the AI companies will legitimately have to come to the table to forge a compromise on AI regulation. And that is when the real sausage will get made.
In the US
President Joe Biden and India's Prime Minister Narendra Modi meet with senior officials and CEOs of American and Indian companies, REUTERS/Evelyn Hockstein
Indian PM Modi wraps up Washington trip with appeal to tech CEOs (Reuters)
Attending a meeting with President Biden and Indian PM Modi were Tim Cook, Sundar Pichai, Satya Nadella, Sam Altman, and others.
New poll sees Americans as cautiously optimistic about AI and strongly supportive of regulation (The Verge)
The animal spirits surrounding AI are decidedly ambivalent but lean, just slightly, toward pessimism.
Last week, we wrote about a Data for Progress poll from late March showing that 45% of likely voters were familiar with ChatGPT. By April, this poll from The Verge found that 57% of its respondents (not weighted by likelihood of voting) had heard of it. Respondents were also strongly supportive (76%) of regulation and laws around the development of AI, and reported that AI had helped them with tasks while carrying tremendous societal uncertainty. Another theme that carried through both the DFP and Verge surveys was the strong age skew of AI usage, with millennials and Gen Z dominating usage patterns in the Verge survey.
The Pentagon’s endless struggle with AI (POLITICO)
Congress is trying to put new pressure on the military, through bills and provisions in the coming National Defense Authorization Act, to get smarter and faster about cutting-edge technology. Defense pundits widely believe the future competitiveness of the U.S. military depends on how quickly it can purchase and field AI and other cutting-edge software to improve intelligence gathering, autonomous weapons, surveillance platforms and robotic vehicles. Without it, rivals could cut into American dominance.
A.I.’s Use in Elections Sets Off a Scramble for Guardrails (NYT)
The headline implies that AI’s use in elections is something that necessitates guardrails, which is possibly — probably? — but not necessarily true. Most campaigns haven’t really started creating media yet. It’s a fair bet that much is coming, but it’s early.
What began a few months ago as a slow drip of fund-raising emails and promotional images composed by A.I. for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.
A.I. May Someday Work Medical Miracles. For Now, It Helps Do Paperwork. (NYT)
The best use for generative A.I. in health care, doctors say, is to ease the heavy burden of documentation that takes them hours a day and contributes to burnout.
Three Ideas for Regulating Generative AI (AI Snake Oil)
Sensible: transparency about the digital supply chain powering generative AI products; transparency about evaluation; and guardrails for open-source models.
In Europe
ChatGPT-maker OpenAI hires European top lobbyist (POLITICO)
Sandro Gianella is joining the company as head of European policy and partnerships… Gianella has had a long career in public policy, most recently serving as the top European lobbyist at payments company Stripe. Before that, he worked on the European public policy teams at Google.
The Vatican Releases Its Own AI Ethics Handbook (Gizmodo)
“The Pope has always had a large view of the world and of humanity, and he believes that technology is a good thing. But as we develop it, it comes time to ask the deeper questions,” Father Brendan told Gizmodo in an interview. “Technology executives from all over Silicon Valley have been coming to me for years and saying, ‘You need to help us, there’s a lot of stuff on the horizon and we aren’t ready.’ The idea was to use the Vatican’s convening power to bring executives from the entire world together.”
Other
The Race to Prevent ‘the Worst Case Scenario for Machine Learning’ (NYT)
OpenAI hired Facebook’s former head of content policy as its head of Trust and Safety, who quickly saw that early users of DALL-E, the company’s image generator, were attempting to create new images of child sexual abuse to disseminate on the internet. The article mentions a new paper from the Stanford Internet Observatory that found a “small but meaningful uptick in the amount of photorealistic A.I.-generated child sexual abuse material circulating on the dark web.”
New Approaches For Detecting AI-Generated Profile Photos (LinkedIn)
An increasingly common attack vector for industrial (and regular) espionage is contacting valuable targets on LinkedIn via a fake profile, offering “consulting” gigs to speak about one’s area of professional expertise. The profile photos often look quite real. LinkedIn announced a new detection method that catches ‘99.6% of a common type of AI-generated profile photo.’ The cat-and-mouse game of AI versus AI.
With the rise of AI-generated synthetic media and text-to-image generated media, fake profiles have grown more sophisticated. And we’ve found that most members are generally unable to visually distinguish real from synthetically-generated faces; future iterations of synthetic media are likely to contain even fewer obvious artifacts, which might show up as slightly distorted facial features. To protect members from inauthentic interactions online, it is important that the forensic community develop reliable techniques to distinguish real from synthetic faces that can operate on large networks with hundreds of millions of daily users, like LinkedIn.
AI Is a Lot of Work (New York Magazine)
A look behind the curtain at the companies and annotators that provide labeled data for machine learning models to train on.
…He was labeling footage for self-driving cars — identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of — frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10.