Understanding Generative AI Risks for Hollywood Is the First Step Toward Beneficial Adoption

Illustration: Caution tape spelling out "AI" (Variety VIP+)

Note: This article relates to the VIP+ special report “Generative AI & Entertainment Part 2,” for subscribers only.

Generative AI poses significant risks for the film and television industry on multiple fronts. Crucially, those risks will arise both within companies and externally, as AI tools and model source code become accessible to the public.

Mitigation of problematic use will come through clear solutions spearheaded, developed and practiced by regulators, courts, lawyers and AI companies. Until more clarity and safeguards are established, some risk-based considerations, whether legal, ethical or technical, are likely to limit or forestall fuller adoption of generative AI in the entertainment industry.

In VIP+’s October special report “Generative AI & Entertainment Part 2,” we discuss several major risk areas for the industry that will need substantive solutions moving forward:

Worker Protections: Union and individual talent contracts will evolve to spell out which uses of generative AI are deemed acceptable. Beyond union agreements that apply to the collective, individual talent contract language around generative AI may standardize to some degree. Such contracts will also demand specific attention, negotiation, approvals and changes, depending on the talent’s own comfort level with specific uses of AI; the needs of a particular project; and even compliance with developing state, federal and international law.

Copyright & AI Training: For the creative community, the source of AI training data is a chief frustration with generative AI. Publishers and artists have publicly argued, at times through litigation, that the use of their works for model training infringes their copyrights.

Over the last year, nearly all of the roughly dozen lawsuits brought against generative AI developers OpenAI, Meta, Alphabet, Stability AI and Microsoft-owned GitHub frame their complaints as copyright infringement and unfair competition, notably two class-action suits brought by authors including Sarah Silverman (Kadrey v. Meta Platforms and Tremblay v. OpenAI).

Some have been perplexed that studios, as major IP rightsholders and content sellers, haven’t contended more directly with their IP being used to train AI systems, particularly as licensing negotiations and deals with AI companies, along with restrictions on web scraping, have emerged at other media publishers.

Despite the wave of model open-sourcing in the AI community, AI researchers generally don’t reveal the data their models ingest. Even without this transparency, it is reasonably suspected, if not close to proven, that AI models have been trained on copyrighted works contained in the massive data sets of publicly available text, images and video scraped from the web. AI researchers argue that training models on such data is “fair use,” the exception under U.S. copyright law that allows certain uses of copyrighted material without permission. But that contention remains in question.

Copyright & AI-Generated Material: Fair use concerns have also raised questions about whether AI-generated outputs themselves infringe copyright if copyrighted data was used to train the models that produced them. So far, lawsuits alleging copyright infringement have been brought against the AI companies, not end users of AI systems. Until this question is settled, however, a possibility exists that content creators or companies themselves could become liable.

Meanwhile, generative AI outputs with no constructive human creativity cannot be copyrighted in the U.S. Official guidance issued by the U.S. Copyright Office in March 2023, and upheld by a judge in August, articulated that AI-generated outputs without “sufficiently” creative modification or rearrangement cannot be protected under copyright. While AI-assisted works can still be copyrighted, the de minimis level of human contribution the Copyright Office requires remains undefined and would be decided subjectively, posing a non-zero risk that an AI-assisted work could be rejected.

Unauthorized Use & Nonconsensual Deepfakes: As generative AI models and software become accessible to anyone, they can also be misused to misappropriate IP or talent likenesses and identities. Experts have long warned about the risks deepfakes pose to individuals and society, but with generative AI tools, deepfakes can be created far more easily, at a quality almost indistinguishable from reality and at much larger scale.

For media companies, a chief risk is IP infringement by users or entities who create and attempt to commercialize AI works that exploit studio IP. Meanwhile, talent will need protections and remedies against nonconsensual deepfakes that use or compromise their visual or voice likeness in images, videos or audio content, which could surface as pornography, ads, promotions, scams and more.

Now, dig into the VIP+ subscriber report ...
