House Hearing on AI Takes Seriously the Dangers of Transferring Rights to a Person’s Voice and Likeness
By Jennifer E. Rothman
February 5, 2024
Last Friday, February 2nd, the House Subcommittee on Courts, Intellectual Property, and the Internet considered what, if anything, to do about AI and the unauthorized use of a person's voice and likeness. The hearing was held in Los Angeles to coincide with Grammys weekend. It focused in part on the recently proposed No AI FRAUD Act, which, while admirably trying to amplify federal protection against unauthorized AI performances, has some concerning provisions. One of these is that the proposed Act would allow others to own another person's voice and likeness forever. Any future revisions to the bill should also consider some of the concerns raised in my earlier post on the Act, as well as in my written testimony, including the breadth of liability and its alignment with the First Amendment.
I testified at the hearing, along with (now) Grammy-winner Lainey Wilson, Harvey Mason Jr., the President and CEO of the Recording Academy, and Christopher Mohr, the President of the Software and Information Industry Association.
The live testimony is available online, along with each witness's written testimony. My written testimony focused on three primary points:
First, that although the capabilities of today’s consumer-accessible AI are new, the problem of using a person’s performance, voice, or likeness without permission is longstanding. We have many laws already on the books that address such unauthorized uses regardless of the technology employed to do so. Accordingly, any federal legislation in this area must improve the current state of affairs and not create or exacerbate problems produced either by AI or unauthorized uses of a person’s identity.
Second, that any federal right to a person’s voice or likeness must not be transferable away from that person. Allowing such transfers violates our most sacred and fundamental rights and thwarts the expressed goals of Congress to protect the livelihood of performers, the dignity of ordinary people, and the ability to distinguish accurate images and sounds from those which are deceptive creations of highly sophisticated AI.

Third, I suggest in the written document some ways in which federal legislation in the area of the right of publicity could be helpful.
My live testimony and the questioning during the almost two-hour hearing focused on concerns over transferability. I gave the hypothetical example of a world in which Taylor Swift’s first record label obtained rights in perpetuity to young Swift’s voice and likeness. The label could then replicate Swift’s voice over and over in new songs that she never wrote, have AI renditions of her perform and endorse the songs in videos, and even have holograms perform them on tour. In fact, the label would be able to sue Swift herself for violating her own right of publicity if she used her voice and likeness to write, record, and publicly perform new songs. This is the topsy-turvy world that both the No AI FRAUD Act and the NO FAKES Act, in their current versions, would create. It serves neither current and future recording artists nor the public more broadly.
Making publicity rights transferable poses a particular risk to student athletes, children, actors, recording artists, and models. Such transferability also threatens ordinary people, who may unwittingly sign over those rights as part of online terms of service.
In answer to one question about these concerns, Ms. Wilson noted that when she started out, she would not have had the bargaining power to retain rights to her own voice or likeness. The members of the subcommittee, including two co-sponsors, seemed interested in working to improve the bill to prevent such a chilling prospect. Although Ms. Wilson spoke in favor of the No AI FRAUD Act, she repeatedly stated that she was not a lawyer, and as the hearing proceeded she seemed more aligned with the concerns that I raised.
Allowing transferability harms not only the person who loses control of their own identity, but all of us. Owners or licensees of another’s identity rights could generate performances by that person forever, making them say and do things they never did. This poses a broad threat to society by undermining trust in authentic communication and seeding misinformation.
Accordingly, any federal rights to a person’s identity must not be transferable, and there must be significant limits on licensing. Without such limits, these proposed laws will exacerbate rather than combat deception and will fail to protect our voices and likenesses.
The hearing left me optimistic that the subcommittee wants to pursue many different methods of addressing the problems of AI, including better enforcement of existing laws and collaboration with platforms to improve detection and removal of infringing materials. Committee members also seemed interested in revising the two circulated AI and performance bills so that future versions do not strip each of us of our rights to our own voices and likenesses.