This week, labor wins big AI concessions in the conclusion to the writers’ strike but loses the fight to ban driverless trucks; a congressional insider offers insight into the fate of active AI bills in Congress; and Meta and Getty release new image generation tools - with limits.
Do you have a friend or colleague who might be interested in subscribing? Forward this over!
Labor and AI draw in California
Organized labor won a major victory in Hollywood with the end of the writers’ strike, and lost a (largely symbolic) fight in Sacramento when Gov. Newsom vetoed a ban on driverless trucks.
The writers’ strike was resolved with the first major organized-labor agreement to grapple with the potential impacts of generative AI; AI was reportedly the final point to be negotiated. Word is, the humans will stay in charge for now. Under the three-year agreement:
AI cannot write or rewrite literary material and therefore cannot receive writing credit. Even if AI generates an idea, a writer must be involved in its subsequent development. (The credit ban isn’t much of a concession: AI-generated work cannot currently be copyrighted in the US.);
Writers can choose to use a tool like ChatGPT in their writing, with the studio’s consent;
Studios can’t force writers to use AI tools or adapt AI-generated material;
Studios can use scripts to train AI models; however, writers retain the right to object and negotiate should this cause economic harm.
Writers’ Guild board and negotiating committee member Adam Conover lauded these terms as very favorable for writers and as an essential part of the deal for winning the votes of rank-and-file guild members, who were furious about the studios’ earlier refusal to discuss AI concerns.
Up Interstate 5, in contrast, unions were dealt a defeat this week when Gov. Newsom vetoed a bill that would have banned driverless trucks from public roads in California. This was largely political theater, as the California DMV already bans driverless trucks.
The bill had recently passed the California Senate 36-2, with strong support from the powerful Teamsters union. Like the writers’ rules on AI, this bill was rooted in real fears about job displacement, but with driverless trucks in their infancy, its enactment wouldn’t have had a significant impact today. The governor’s veto let individual legislators vote “pro (human) driver,” in line with labor, without actually hindering the development of self-driving technology in the state.
Still, the lopsided vote, with the vast majority of both Democratic and Republican senators voting in support, suggests that future legislation that protects human jobs and hinders AI competition will be coming not just in California, but in red and purple states too.
Three Questions For: Howard Waltzman
Three Questions For is a new section in which we ask experts for an inside perspective on what’s happening at the intersection of AI, policy, and politics.
This week we spoke with Howard Waltzman, a partner at Mayer Brown and co-leader of their Public Policy, Regulatory, and Government Affairs Group, to get an inside take on the recent congressional momentum on AI legislation. Mr. Waltzman previously served as Chief Telecommunications and Internet Counsel for the U.S. House Energy and Commerce Committee as well as General Counsel for Sen. Sam Brownback (R-KS).
Alex & Greg: Where is the momentum on federal AI bills coming from right now - genuine concern from electeds, tech lobbying, or constituent demand?
Momentum for this legislation emanates from the rapid evolution of AI, especially generative AI technology. Policymakers, developers, deployers, and consumers are all concerned about the potential societal impact of AI. An additional impetus comes from policymakers’ concern that they were behind the curve in setting guidelines for online platforms.
A & G: We've been tracking several congressional legislative initiatives: a) Protect Elections from Deceptive AI Act; b) REAL Political Ads Act; c) Bipartisan Framework for US AI Act; and d) Transparent Automated Governance Act. Which of these are most likely to progress and eventually get passed, and why?
First, I think it’s important to recognize that these are generally bipartisan initiatives, as are Leader Schumer’s efforts with Sens. Young, Heinrich, and Rounds, and Sens. Thune and Klobuchar’s impending bill. The bipartisan consensus on the importance of being proactive about establishing a framework governing AI raises the likelihood that Congress acts sooner rather than later.
However, some of these initiatives have overlapping components, so rather than one bill being more likely to progress over another, I think these efforts are going to converge - either at the committee level or on the Senate floor.
I also think the Judiciary Committees are going to have a vigorous debate about the relationship between AI and copyrights.
A & G: What's your read on Sen. Schumer's AI forum initiative? Why are they taking this approach as opposed to letting staff spearhead draft legislation?
Both settings are important and warranted. The AI forum initiative is very beneficial because it offers Senators the opportunity to really learn about AI technology, large language models, and use cases.
AI technology is so complex, and policy issues around it so potentially controversial, that giving Senators the opportunity to ask questions without feeling that they will be criticized for a limited understanding of the technology is beneficial to the legislative process. So is enabling private sector participants to provide frank answers, and for Senators to observe frank exchanges between forum participants.
Getty and Meta release new tools, with caveats
We’re continuing to monitor the evolving strategies tech platforms are employing to label and detect AI-generated content and mitigate mis/disinformation risks (see our previous coverage here and here). This week there are significant updates from Getty Images and Meta.
Getty Images partnered with Nvidia to develop a new product that lets users generate photorealistic images from a model trained only on Getty’s own licensed image library, avoiding the copyright ambiguity of models like Midjourney, which are trained on data scraped from the web. Notably, users aren’t permitted to generate likenesses of real people because Getty doesn’t want to allow users to “manipulate or recreate real-life events”.
For its part, Meta finally started to release consumer-facing AI products, including a feature that lets users generate images. Rather than restrict the kinds of images users can create, Meta is initially combating disinformation by marking generated images with an icon indicating they are AI-generated. Meta’s press release also mentions ongoing experiments “with forms of visible and invisible markers.”
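To make the labeling idea concrete, here is a minimal Python sketch of one way to pair a visible badge with an “invisible” provenance tag, using the Pillow imaging library. It is not Meta’s implementation: the file names, badge text, and metadata key are assumptions for illustration only.

```python
# Minimal sketch of "visible + invisible" marking for AI-generated images.
# NOT Meta's implementation: file names, badge text, and the PNG metadata
# key are illustrative assumptions.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(src_path: str, dst_path: str, label: str = "AI generated") -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # Visible marker: a small badge in the bottom-right corner.
    draw = ImageDraw.Draw(img)
    draw.rectangle([(w - 160, h - 28), (w, h)], fill=(0, 0, 0))
    draw.text((w - 152, h - 22), label, fill=(255, 255, 255))

    # "Invisible" marker: a provenance flag tucked into the PNG metadata.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

if __name__ == "__main__":
    mark_ai_generated("generated.png", "generated_labeled.png")
```

A metadata tag like this is trivially stripped by re-saving the file, which is why platforms are also experimenting with markers embedded in the pixels themselves.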
Of Note
Technology
26% of the top 100 websites are now blocking GPTBot (Search Engine Land) Blocking GPTBot now also prevents OpenAI from using a site to answer user queries (which could otherwise drive additional traffic).
Google adds a switch for publishers to opt out of becoming AI training data (The Verge) Sites can now opt out of training Bard without opting out of search.
The Secret Ingredient of ChatGPT Is Human Advice (The New York Times) New techniques beyond human feedback training will be needed to keep scaling models.
Amazon Takes a Big Stake in the A.I. Start-Up Anthropic (The New York Times)
Microsoft is going nuclear to power its AI ambitions (The Verge)
These 183,000 books are fueling the biggest fight in publishing and tech (The Atlantic)
Elections
Swiss political parties agree to limit use of AI ahead of elections (SWI swissinfo.ch) Five major parties agreed that any use of AI in their campaigns must be disclosed, and barred the use of AI in negative campaigning.
NSA, FBI, and CISA Release Cybersecurity Information Sheet on Deepfake Threats (CISA)
Klobuchar presses Congress to regulate use of AI in elections (MinnPost) Sen. Klobuchar is hoping to pass her bill prohibiting “materially deceptive” generative AI in federal elections by the end of 2023.
Regulation
White House could force cloud companies to disclose AI customers (Semafor) An upcoming Biden Executive Order on AI may include know-your-customer requirements.
Exclusive: VC firms working with D.C. to "self-regulate" AI startup investing (Axios) VCs are building a bloc around representing startups’ interests within AI regulation.
2023 State AI Legislation Summary (BSA The Software Alliance via Axios) State legislatures’ AI bills have increased 440% in 2023, with a tilt towards deepfakes and governmental use of AI.
Canada wants to be the first country to implement AI regulations: Minister of Innovation (VentureBeat)
Europe Has Figured Out How to Tame Big Tech. Can the U.S. Learn Its Tricks? (The Information) “If you ask the Europeans, one secret to their success is that lawmakers in the EU actually do their homework.”