AI's new copyright frontiers
This week, copyright issues take center stage, states announce moves on AI regulation, and NVIDIA skyrockets
Intellectual property developments in art, media, and music
We wrote about copyright dynamics a few weeks ago, and this week brings some notable updates:
Judge rejects AI copyright claim: Last Friday, a federal judge rejected an attempt to copyright an artwork generated by artificial intelligence, citing a lack of precedent for a “nonhuman” to claim copyright. The image in question was created in 2012, and resolving the dispute between its creator and the US Copyright Office took more than a decade. That timeline foreshadows the lengthy legal battles ahead for the more recent AI copyright suits.
Hollywood studios value humans over AI, this time: The studios took a similar approach in their latest counteroffer to the Writers Guild of America amid the writers’ strike, proposing that “writers wouldn’t receive less compensation for rewriting work produced by AI or for developing a script based on an AI-generated story,” and that AI could not receive credit for content it produced.
YouTube <> Universal Music Group partnership: A partnership has been brewing between YouTube and music giant UMG that would let artists opt in to consumers creating deepfake songs with their voices in exchange for a revenue share, and this week YouTube officially announced it. The financial relationship between YouTube and music labels is significant: music accounts for 25% of YouTube’s global watch time, and in 2021 the music industry made $6 billion in licensing revenue from YouTube.
The deal points include a “Music AI Incubator”, wherein select artists will offer feedback on Google’s AI music-related tools, and the extension of YouTube’s automated copyright detection system to detect infringement from generative AI. The Verge explores what this might look like in a worst-case scenario: extending an imperfect Content ID system to voices, not songs, could imperil everyone from celebrity impersonators to kids trying to rap like Drake.
The New York Times blocked OpenAI’s web crawler: Just days after OpenAI allowed websites to opt out of being included in future training data, the New York Times took them up on the offer. It appears other high-traffic content websites like Amazon.com have also begun blocking OpenAI’s crawler, though there are still some notable holdouts (Reddit, the Washington Post, etc.). As we wrote about in this issue, it’s an easy decision for large content holders to block OpenAI’s crawler because there are no consequences for doing so. The same can’t be said for blocking Google’s web crawlers, which would result in websites being excluded from search results, so it remains to be seen how publishers will respond to other platforms’ scraping.
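Mechanically, the opt-out works through a site’s robots.txt file: OpenAI’s crawler identifies itself with the documented GPTBot user agent, and publishers add a rule disallowing it while leaving search crawlers alone. As a rough sketch (the robots.txt content below is a hypothetical example, not the NYT’s actual file), Python’s standard-library `robotparser` shows how a compliant crawler would interpret such a rule:

```python
from urllib import robotparser

# Hypothetical robots.txt modeled on what publishers blocking OpenAI
# have deployed: disallow GPTBot everywhere, allow everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks before fetching a page:
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

This also illustrates why the decision is “easy”: the rule is purely advisory and only affects crawlers that honor it, which is exactly why blocking GPTBot carries none of the search-visibility cost of blocking Googlebot.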
State governments make tentative but notable steps forward on AI legislation
As we wrote about a few weeks ago, state legislatures are starting to make incremental progress on AI legislation. This week brings additional action to report, ranging from a symbolic, AI-drafted resolution about AI in California to Minnesota’s new criminal penalties for using deepfakes to influence elections.
Kansas: Kansas Governor Laura Kelly’s administration created a standard policy for the use of generative AI throughout Kansas’ executive agencies (KFDI)
California: The CA legislature passed a resolution, itself drafted by AI, reiterating and anchoring the Legislature’s AI efforts to the White House’s AI Bill of Rights, though nothing concrete has followed so far. (CA State Sen. Bill Dodd)
Minnesota: The Secretary of State, who runs Minnesota’s elections, gave an interview discussing the danger deepfakes could pose to elections and the new enforcement authority the MN legislature passed in May, which makes it illegal to knowingly use deepfakes to influence an election within 90 days of one. (KSTP)
Wisconsin: Gov. Evers created a task force to study AI’s effect on the Wisconsin workforce. (CBS Minnesota)
NVIDIA at the center of the financial world
Chips continue to dominate financial headlines this week. On Monday, the New York Times analyzed NVIDIA’s seemingly untouchable “competitive moat” around AI chips. Yesterday, NVIDIA reported in its earnings that “data center sales” – GPUs for AI training – grew from $4.3 billion to $10.3 billion, a 2.4x increase in a single quarter. Notably for this newsletter, NVIDIA’s CFO warned on the call that further export restrictions on GPU sales to China would result in “a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world’s largest markets.”
Of Note
The U.S. Regulates Cars, Radio and TV. When Will It Regulate A.I.? (The New York Times)
Salesforce creates an acceptable use policy for its artificial intelligence products (Salesforce)
OpenAI’s new GPT-3.5 fine-tuning offering for businesses includes a built-in safety/moderation layer, powered by GPT-4, that cannot be disabled (OpenAI)
UK AI Summit at Bletchley Park - In November, the UK Government will host a global summit on AI regulation at Bletchley Park, the site of Allied efforts to break German encryption during WWII.
Despite Cheating Fears, Schools Repeal ChatGPT Bans (The New York Times)
Responsible AI User Testing of Midjourney's AI-generated Images (Karim Ginena) A former Meta researcher audits Midjourney’s biases across religion, nationality, ethnicity, and more.