Draft Digital Replica Bill Risks Living Performers’ Rights over AI-Generated Replacements

By Jennifer E. Rothman
October 20, 2023

The NO FAKES Act, released as a discussion draft last week, proposes establishing a new federal digital replica right that would extend for 70 years after a person’s death. The one-pager accompanying the draft highlights that the legislation is driven by concerns that “unauthorized recreations from generative artificial intelligence (AI)” will substitute for performances by the artists themselves. Unfortunately, this laudable goal is undercut by some of the provisions in the current working draft.

I appreciate that this is a discussion draft and that the sponsoring Senators are seeking feedback on it. With that in mind, I want to flag my biggest concerns with the current draft and have prepared a more detailed analysis of its provisions in a separate supporting document.

As currently drafted, this legislation would make things worse for living performers by (1) making it easier for them to lose control of their performance rights, (2) incentivizing the use of deceased performers, and (3) creating conflicts with existing state rights that already apply to both living and deceased individuals. I will consider these concerns, revisit the stated (and unstated) objectives of the legislation, and make some recommendations.

Big Picture

Existing Protection and the Do-No-Harm Principle. State right of publicity laws already protect against unauthorized digital performances, and state and federal trademark and unfair competition laws restrict many of the possible uses and much of the promotion of computer-generated performances. Copyright law may also limit some of these new audiovisual works. The bar to adding a new layer on top of this existing legal structure should therefore be high, and any new law should certainly not leave anyone worse off than under the status quo.

Dead People Should Not Replace the Living. The legislation proposes adding a new federal right in dead performers (and, more broadly, in all dead people’s digital replicas). This will further incentivize a market in digital replicas of the dead that can displace work for the living by substituting newly created performances by famous deceased celebrities for performances by living artists. AI poses a monumental threat to employment opportunities for everyone, including actors and singers. The postmortem provision (at least as drafted) exacerbates that threat rather than protecting against it.

Increases Likelihood of Performers Losing Control over Future Performances and of Misinformation. Although the proposed legislation ostensibly seeks to protect individual (living) performers from the harms that flow from substitutionary performances made possible by ever-improving AI technology, the current draft leaves performers potentially worse off by empowering record labels, movie studios, managers, agents, and others to control a person’s performance rights, not just in a particular recording or movie but in any future “computer-generated” contexts.  

As written, the legislation sets up a world in which AI-generated performances could be created without any specific involvement or approval by the individuals depicted (beyond some broad, unlimited license) and without any required disclosure that the performances were AI-generated or that those depicted did not agree to them. AI-generated performances will be able to portray individuals saying, doing, and singing things they never said, did, or sang. This will exacerbate the dangers of misinformation, false endorsements, and deceptive “recordings,” rather than combat them. As Senator Klobuchar and others have so powerfully warned, such deceptive performances pose one of the greatest current threats to democracy and truth. This draft bill seems to facilitate them rather than restrict them.

Broad Liability May Ensnare Consumers. The scope of the draft’s liability provision is vast and rests on strict liability (or perhaps even liability without any voluntary act). Under the current provisions, ordinary consumers who use generative AI programs could be found liable for violating the digital replica right. This should remind us of the recording industry’s lawsuits against individual file-sharers in the peer-to-peer era, before music streaming took off.

Objectives of the Legislation

Part of the challenge with the current draft is that it has conflicting objectives. Its stated objective is to protect the rights of performers. Its unstated but implicit objective is to protect those who hold copyrights in sound recordings (i.e., the recording industry). These goals are in tension, at least as currently addressed in the draft. If the primary concerns are those of the recording industry, the bill could tackle them more directly and more narrowly. Alternatively, if the primary goal is to protect the interests of performers, the legislation should have different contours and should engage more with state right of publicity laws.

Protection of Performers? The one-pager released along with the draft legislation highlights the concern over the computer-generation of performances by real people without their permission. The document points to two recent and high-profile examples. The first is the viral AI-generated song “Heart on My Sleeve,” which imitated the voices of Drake and The Weeknd. The song became a hit—until it was removed from various platforms—and the public initially thought it was a real song by the performers. The second example is the recent AI-generated version of Tom Hanks used without authorization in an advertisement for a dental plan.

Combating such creations, especially when the fabricated performances are those of politicians purportedly making public statements, is undoubtedly a compelling goal. But an important point missing from the one-pager is that Tom Hanks, Drake, and The Weeknd are not left adrift by current law. In fact, they each have slam-dunk lawsuits under state right of publicity laws for the two uses described. Federal trademark, unfair competition, and false advertising laws under the Lanham Act, along with similar state laws, would also provide claims. Having a federal law might send an additional signal and make it easier to file in federal court in a preferred jurisdiction, but it would not fill a gap in the law. A broader federal right of publicity law could harmonize and clarify state right of publicity laws and the exceptions to them, but this legislation does not do that.

One benefit of the proposed legislation for performers is that it explicitly designates the new performance right as “intellectual property” for purposes of Section 230, which would make it easier to have unauthorized performances removed and to obtain damages from online platforms. There is currently a circuit split over whether right of publicity claims fall within the immunity provisions of Section 230 of the Communications Decency Act or instead fall under its intellectual property exception. This matters because, if the exception does not apply, it is difficult to get platforms to take down infringing content. However, the Section 230 problem could be addressed more directly by amending Section 230 itself, without the need to create a new performance right. Such an amendment would provide far greater clarity and protection for performers and others who currently have their likenesses and performances circulated without permission, including victims of the unauthorized circulation of intimate images.

Protection of the Recording Industry

The bill makes more sense when understood primarily as addressing the record labels’ concerns about AI-generated songs that might substitute for their legitimate releases. This focus is evident in the explicit extension of the digital replica rights to any person or entity that has an “exclusive personal services” contract with “a sound recording artist as a sound recording artist.” In other words, record companies, not just the performers themselves, get (and can enforce) rights to performers’ digital replicas. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and to exploit this lucrative option over more costly performances by living humans, as discussed above. There are many ways to address the recording industry’s concerns, but giving record labels and others a federal right to digital replicas of individual people may not be the best one. Styling the protection this way provides the rhetorical benefit of harnessing the star power of famous performers, but in reality the draft legislation appears to be a net liability for performers.

The recording industry has some legitimate concerns, especially when newly generated performances are passed off as authentic recordings by artists with whom the labels have exclusive contracts, or when such performances unfairly disrupt the release of their sound recordings. The recording industry, however, should not be able to block nondeceptive computer-generated musical tracks that simply emulate the style (but do not replicate the voice) of known performers and do not cause confusion as to a performer’s involvement with the work. Because the recording industry is not certain how copyright litigation in this area will play out, it is looking for a quicker and more straightforward fix. But this cure may be worse than the disease.

Some Recommendations Going Forward

Consider Whether This Is the Right Fix for the Problem

If the primary focus is not the inadequacy of current law for performers, but instead the challenges posed by Section 230 and concerns about the effectiveness of copyright law to protect the recording industry, the proposed digital replica right may not be the best way to tackle these concerns.

Instead, amending Section 230 to clarify that state right of publicity and appropriation-based claims may proceed against interactive computer services would help both performers and the recording industry, as well as the broader public.

A more targeted sound recording right could also be drafted, one that expressly focuses on the recording industry’s concerns without jeopardizing performers or their control over future performances.

Limit to Rights for the Living

A bill purportedly seeking to protect the livelihood of performers is not the proper place to create a novel federal right in dead performers who can substitute for living performers for 70 years after their death. This aspect of the proposed bill seems completely at odds with its stated purposes and will instead prop up reanimated replacements for up-and-coming performers. To the extent that the postmortem provision is driven by the recording industry’s desire to protect recordings by deceased artists, that goal can be accomplished in a more limited fashion.

There may be reasons to provide federal postmortem rights, particularly to harmonize state laws in this area, but doing so requires a much deeper dive into why we are providing them, a reckoning with the variety of existing state laws, and consideration of who should be able to own and profit from these dead performers’ rights. The postmortem provision has little to do with addressing the problems posed by AI; it will primarily enrich companies that own and manage the rights of dead people while doing little to address the concerns of the living.

In addition to incentivizing the replacement of up-and-coming performers with dead celebrities, the postmortem provision (as currently drafted) will also force the commercialization of deceased celebrities even when that is contrary to their wishes or the wishes of their families, because of how such a new intellectual property right would be treated under current estate tax laws. Because a descendible, transferable property right is valued as part of the taxable estate, heirs may need to exploit a deceased person’s digital replica simply to pay the resulting tax bill.

Better Protect Performers by Restricting Scope and Duration of Licenses and Add Disclosures

If legislation similar to this draft proceeds, the licensing of performance rights should be far more limited so as not to become a subterfuge for long-term, perpetual, or global licenses that are akin to transferring all future rights in a person’s performance to others.

Licenses should not exceed seven years, an outer limit we see in the regulation of personal services contracts. Any license involving a child should expire when the child turns 18, should be reviewed by a court, and should require that any income earned be held in trust for the child.

A license should authorize only a specific performance or set of performances over which the person has control, including control over the words and actions of their computer-generated self. This has the important bonus of protecting against deceptive performances in which a performer played no role. In the absence of such individualized approval, the law should mandate clear and prominent disclosures that the performances in question were not given by the individual and were not specifically reviewed or approved by the person depicted.

Clarify Protections for Free Expression

The First Amendment provides latitude for works in the style or genre of others. So there should not be liability for works merely in the style of a Taylor Swift or Drake song when there is no confusion as to the use of the performer’s voice or participation and no advertising using their names or likenesses. Allowing such computer-generated works furthers the spirit of “foster[ing] art” and “nurtur[ing] originals” that the proposed Act sets out as its objectives. An easy way to address this concern in the current draft is to limit claims to situations in which the recording actually sounds like the person’s voice. Disclaimers should not be relevant to that inquiry.

Some of the current exclusions would benefit from greater clarity, especially around the use of real people in fictional works. The exclusions also do not provide sufficient protection for works outside the traditional film and television model of audiovisual works, such as musical works, video games, and interactive computer programs, including those that are educational in nature and part of school curricula.

Address Conflicts with Potential Plaintiffs, Existing Rights-Holders, and Licensees

The current draft does not address what happens if an individual approves of a computer-generated recording, but a licensee or holder of a personal services contract for sound recordings does not approve of the use. This suggests that a record label could sue the performer themselves for violating the legislation.

The draft states that it leaves state laws in place but does not clarify what happens to existing licensing agreements that already cover the same rights at issue here and that will conflict with this newly created right.

The draft does not address what happens when someone uses computer-based technology to create authorized derivative works based on copyrighted works in which performers agreed to appear.

If the legislation proceeds, future drafts should provide guidance on how these conflicts should be addressed.

I suspect that, if this legislation continues, there will be many changes to this initial discussion draft. In the meantime, for those who are interested, I have prepared a more detailed summary of the key provisions of the draft legislation, along with my analysis of them, in a separate document that can be accessed here.