The Growing Challenge of Protecting Name and Likeness from AI Manipulation

The surge in AI technology has dramatically expanded the arena of Name, Image, and Likeness (NIL) rights, creating both new opportunities and complex legal challenges. As AI-driven tools become more sophisticated, the ability to replicate or manipulate a person’s identity, be it their voice, face, or overall persona, raises pressing questions about ownership, control, and consent.

Traditionally, NIL rights have protected individuals from unauthorized use of their personal attributes for commercial gain. These rights, rooted in “right of publicity” laws, grant individuals the ability to control how their identity is exploited, ensuring they can profit from and prevent misuse of their persona. However, the advent of AI, especially deepfake technology, complicates this landscape. AI models can now generate hyper-realistic replicas of celebrities’ voices and images from minimal data input, often without their knowledge or approval. This creates risks not only of unauthorized commercial use but also of reputational harm and outright deception, the latter often referred to as deepfake fraud.

One of the core concerns is that AI models can train on vast amounts of publicly available data, including images, videos, and audio clips, often obtained without explicit permission. This means AI models may be able to produce convincing representations of public figures, even if those individuals have not consented to such use. For celebrities, AI-generated replicas of their voice or likeness could be used in marketing, entertainment, or even misinformation campaigns, all without remuneration or control by the individuals themselves. Laws around the right of publicity vary by jurisdiction; some states, such as California, offer broad protection against unauthorized uses, while others limit protections or recognize exceptions for creative or parody uses.

What makes the situation more complex is the vague scope of what constitutes NIL in the AI context. Legal experts acknowledge that the traditional protections for NIL, such as the right to control the commercial exploitation of one’s name or image, may need to evolve or be reinforced to address AI’s capabilities properly. For example, a celebrity’s voice could be used to read advertisements or give testimonials, blurring the line between authorized endorsements and unauthorized deepfakes. The risk is not just financial but also reputational, as AI could create content that damages an individual’s brand or misleads the public about their opinions.

Legal scholars and policymakers are actively debating how existing laws can adapt to these technological shifts. The US Patent and Trademark Office (USPTO) and other agencies have held roundtables to explore these issues, emphasizing the importance of establishing protocols for tracking and verifying the provenance of AI-generated content. The goal is to develop mechanisms that can identify unauthorized AI-produced NIL content and deter its misuse, although current detection capabilities remain limited.

For celebrities and public figures, there is growing consensus that their NIL rights require stronger legal protections against AI abuse. The use of AI to generate near-perfect replicas of voices or images without compensation or consent undermines their ability to profit from and maintain control over their personal brand. And because these rights are based in state law, individuals in some jurisdictions receive more comprehensive protections than others. For instance, California’s laws prohibit a wide range of unauthorized uses of NIL, including certain AI-generated replicas, while other states restrict protections to specific contexts like advertising.

As AI continues to evolve, so too will the tactics for defending NIL rights. The potential for AI to produce highly realistic, customizable replicas of individuals increases the urgency for clearer regulation. New licensing models and legal frameworks are likely needed to ensure that AI-generated replicas are used ethically and with proper authorizations. Athletes, actors, and public figures are already seeing the impact of this shift, with some pursuing legal action over unauthorized AI uses of their likenesses and voices. The development of AI-specific protections could serve as a vital tool in safeguarding personal rights in this digital age, establishing a new standard for responsible innovation and respect for individual identity.

For now, the core takeaway is this: the rapid growth of AI technology has made the fight over NIL rights more urgent and more complicated than ever. As the boundaries of what AI can do expand, so must the legal protections that help individuals maintain control over their personal identities in the digital realm. Whether through legislation, licensing, or technological safeguards, protecting personal NIL from AI misuse is becoming a defining issue in the digital economy.