A Discussion of Part 1 of the U.S. Copyright Office Report on Copyright and Artificial Intelligence

Discussing artificial intelligence as interpreted by the U.S. Copyright Office in reference to the Lanham Act, the NO FAKES Act, and other laws addressing deepfakes and digital replicas.

Written by: Eric Goldman

The U.S. Copyright Office is actively monitoring the impact artificial intelligence (AI) systems are having on intellectual property, recognizing the traditionally symbiotic relationship between copyright and technology. In early 2023, the Copyright Office announced an initiative to explore the intersection of copyright and AI. A Notice of Inquiry (NOI) was issued in August 2023 seeking public comment, and more than 10,000 comments were submitted in response.

The result will be a formal report from the Copyright Office on copyright and AI. Part 1 of that report was published in July 2024, addressing the issue of AI-generated digital replicas, or deepfakes. The report ultimately recommends that new federal legislation be enacted to protect against the unauthorized distribution of AI-generated digital replicas.

Discussion of Existing Protection

The Report begins by examining the protections currently available to claimants harmed by the unauthorized distribution of AI-generated digital replicas, or deepfakes, starting with a review of current state laws. These laws are generally based on theories involving the rights of privacy and publicity.

The right of privacy, recognized by most states, protects against unreasonable intrusions into people’s private lives and can be used to safeguard people’s autonomy, dignity and integrity. These laws typically apply only to the living. A claim of false light invasion of privacy alleges that someone has been portrayed before the public in a false and highly offensive way, with knowledge of, or reckless disregard for, the falsity of the portrayal. A claim of invasion of privacy by appropriation alleges the use of someone’s name or likeness for the financial or professional gain of another.

The right of publicity, also recognized by most states, involves the use of someone’s persona in a commercial context. Some states provide protection under the right of publicity for the deceased. There are some First Amendment checks on the right of publicity, and carveouts are generally provided to allow for free speech.

The report noted that some states have begun to provide protections against unauthorized AI-generated digital replicas. Generally, these laws also provide First Amendment protections for free speech.

The Report then turned to the remedies currently available under federal law, beginning with the U.S. Copyright Act. Copyright law does provide protection against the use of copyright-protected images in AI-generated deepfakes. However, copyright does not protect a person’s identity.

The Federal Trade Commission Act protects against unfair methods of competition, and against unfair or deceptive trade practices, as they pertain to commerce. The FTC maintains that the Act can reach the use of AI to mimic an individual’s voice or likeness, especially in instances where consumers may be deceived, or where the value of someone’s reputation or work is diminished. The FTC is actively engaged in examining the challenges presented by the use of AI-generated digital replicas in commerce. 

The FTC has also issued a new rule on government and business impersonation, commonly referred to as the Impersonation Rule. This rule specifies that it is an unfair or deceptive trade practice to materially and falsely pose as a government official, business or business official.

The Lanham Act primarily focuses on trademarks and service marks, but it also addresses unfair competition. The unauthorized use of an AI-generated digital replica can constitute an act of false endorsement under the Lanham Act. However, a successful Lanham Act claim requires commercial use and proof of a likelihood of confusion, mistake or deception. Notably, so-called “revenge porn” would generally fall outside the scope of the Lanham Act.

The Report then turned to a discussion of the Communications Act. While the Federal Communications Commission does not currently regulate the use of AI-generated digital replicas, it has begun the process of developing such regulations and of authorizing state Attorneys General to enforce them. The FCC has also adopted a declaratory ruling confirming that the use of voice-cloning technology in robocall scams is illegal.

The Need For New Federal Legislation

Based on its review of the available state and federal remedies, the Copyright Office concluded that there is a need for new federal legislation addressing AI-generated deepfakes. State laws are inconsistent with one another, providing varying levels of protection. For example, several states require a showing that an individual’s identity has commercial value before a claim can succeed; not all states protect voices, and those that do provide differing levels of protection. Most importantly, most state laws are limited to instances where infringement occurs in advertising, on merchandise or in other commercial contexts. Some states also provide exceptions that go beyond First Amendment concerns. The result has been described as, at best, a patchwork of protections.

Existing federal laws are too narrowly drawn to be an effective tool with which to address the challenges presented by AI-generated digital replicas. Copyright does not address situations where the person whose voice or image is being appropriated does not own the copyright in the image or recording being used. Both the FTC Act and the Lanham Act require use in commerce, and many uses of AI-generated deepfakes are not commercial in nature. Lastly, the jurisdiction of the FCC is limited to common carrier services, transmissions and cable services. 

The Report noted that Congress has begun to address the issues presented by AI-generated digital replicas. The Preventing Deepfakes of Intimate Images Act would criminalize the intentional disclosure of, or threat to disclose, AI-generated intimate images; the REAL Political Advertisements Act would require political advertisements to disclose the use of AI-generated sounds or images; and the Protect Elections from Deceptive AI Act would criminalize the distribution of deceptive AI-generated media related to federal elections. 

The No AI FRAUD Act, introduced in 2024, would establish a new form of intellectual property in voices and likenesses, and protect against the unauthorized use of that intellectual property in AI-generated digital replicas. Rights would endure for a period of at least 10 years after death. Any authorization to use the intellectual property would require a writing and representation by legal counsel. There are protections provided for minors, and a list of First Amendment factors to be considered by courts. Potential remedies would include statutory or actual damages, lost profits, punitive damages and attorneys’ fees. The law would not preempt any remedies available under any state law.

The NO FAKES (Nurture Originals, Foster Art and Keep Entertainment Safe) Act, introduced in 2024, would create a right to authorize the use of an image, likeness or voice in an AI-generated digital replica. That right would exist for a period equal to the life of the individual in question plus seventy years, be licensable and be inheritable. The draft also imposes liability for knowingly producing and disseminating an AI-generated digital replica without authorization. Exceptions are included to address First Amendment concerns and incidental, or de minimis, uses. Remedies would include actual or statutory damages, punitive damages and attorneys’ fees.

Recommended Legislation 

Taking all of the foregoing into account, the Copyright Office recommends that Congress enact new legislation addressing the challenges of AI-generated deepfakes. After reviewing the comments received in response to its NOI, the Copyright Office determined that such legislation should address the following issues and concerns.

  1. Subject Matter. The Copyright Office has defined a “digital replica” as “a video, image or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual.” 
  2. Persons Protected. One of the factors treated differently by the various states is whether rights of publicity are protected for everyone, or only for the famous. The Copyright Office has determined that any new federal legislation should provide protection without regard to the level of fame of the individual seeking redress – the protection should be for all.
  3. Term of Protection. The Copyright Office acknowledged that there does not appear to be consensus as to whether any new federal protection should survive a person’s death. At a minimum, the Office recommends:

“A federal digital replica right should prioritize the protection of the livelihoods of working artists, the dignity of living persons, and the security of the public from fraud and misinformation regarding current events. For these purposes, a postmortem term is not necessary.”

At the same time, the Copyright Office recognizes that an argument can be made for post-mortem protections. Should such protections be incorporated into any new law, the Office recommends a post-mortem term of less than 20 years, with possible extensions if the right is actually being exploited.

  4. Infringing Acts. The Copyright Office draws a distinction between the creation of an AI-generated deepfake and the distribution of that deepfake. Certainly, an AI-generated deepfake could be created for the purposes of an artist’s experimentation or the creator’s personal entertainment, and the Office does not necessarily view this as causing sufficient harm to warrant a remedy. Rather, it is the dissemination of the AI-generated deepfake that should be the basis of liability. As the Office recognizes that harm can be generated by both commercial and non-commercial distribution of AI-generated digital replicas, remedies should not be limited to claims arising from the commercial distribution of such deepfakes.

However, the Office recommends limiting liability to those with actual knowledge that they are distributing an AI-generated deepfake, exempting those who unknowingly distribute social media posts and other materials that contain an undisclosed AI-generated digital replica. The Office also counseled against adopting an “intent to deceive” standard of liability, as sometimes the intention is to harass or ridicule rather than to deceive. Finally, the Office recommends the establishment of a notice-and-takedown procedure, similar to that established under the Digital Millennium Copyright Act, to protect online service providers from liability.

  5. Licensing and Assignment. The Copyright Office recommends that people be able to license the right to use their voices and likenesses in AI-generated digital replicas, but not to assign such rights outright. The potential for abuse in a perpetual assignment of the right to create AI-generated digital replicas is simply too great, especially in the case of minors. In addition, the permitted terms of such licenses should be limited to 5- or 10-year terms. Upon the expiration of the term, licenses granted during the term could continue, but the right itself would revert to the individual depicted. In addition, consent must be informed and conspicuous, as opposed to being buried in standard terms and conditions without adequate disclosure.
  6. First Amendment Concerns. Certainly, any legislation addressing AI-generated deepfakes will have to address First Amendment concerns, such as uses in news reporting, artistic works, parody and political opinion. However, there does not appear to be consensus on how such concerns should be addressed in any new legislation. The Copyright Office recommends the establishment of a balancing framework, similar to the one applied to fair use defenses in copyright infringement claims.
  7. Remedies. The recommendation of the Copyright Office is that remedies include both monetary and injunctive relief. Also, in addition to actual damages, statutory damages and attorneys’ fees should be available, as some claims under the new law would not present realistic opportunities for financial recoveries.
  8. Preemption. The Copyright Office takes the view that any new federal law should operate as a floor, and not a ceiling, leaving room for the individual states to provide greater protections than are available at the federal level. With that in mind, the Office recommends against preempting state laws in the enactment of any new federal remedy.

It remains to be seen whether Congress will follow the recommendations of the Copyright Office on how best to address the issues presented by AI-generated deepfakes. At the very least, though, this comprehensive and inclusive report does us all a great service by summarizing the existing challenges and the range of opinions on best practices.

Conclusions

The lawyers at IBL are thought leaders on issues pertaining to AI and will continue to monitor legislative developments addressing this technology. We will continue to provide discussions of future Copyright Office reports on AI.

Whether you are someone who has been digitally replicated, the creator of AI-generated images looking to safely and legally distribute your work, or anyone who has been impacted by the emergence of this technology, the lawyers at IBL are here to help. Please reach out to schedule a consultation today.
