On May 20th, OpenAI announced the temporary suspension of Sky, its new voice assistant, following Scarlett Johansson’s complaint about the use of a voice strikingly similar to hers. OpenAI CEO Sam Altman had reportedly mentioned being inspired by Spike Jonze’s film “Her” when creating the new conversational voice assistant for ChatGPT. In that film, the actress lent her voice to Samantha, the seductive and non-threatening voice assistant with whom Theodore Twombly (played by Joaquin Phoenix) falls in love. A few months earlier, Johansson had received a call from Sam Altman offering her the role of voicing his new application, an offer she declined. The company claims not to have copied or imitated the actress’s voice, but rather to have hired another professional who used her own natural voice. The lack of evidence has led many users to suspect that OpenAI simply trained its new application on the many available audio recordings of the actress.

The technology capable of generating synthetic voices is not new. However, its refinement now allows the production of entirely AI-generated musical tracks featuring the voices of artists who never actually performed them, such as the song “Heart on My Sleeve,” which uses the voices of Canadian artists Drake and The Weeknd and which the rights holder attempted, without full success, to remove from platforms over a year ago. Johansson’s is therefore not the first case in which lawyers have faced the problem of a person’s voice being synthetically generated without proper authorization; nor is it the first time an “analog” imitation of a singer’s voice has been used for commercial purposes (STS of May 30, 1984 (ES:TS:1984:1003)). It may, however, be the first time AI has “imitated” a famous person’s voice for commercial purposes.

In the US, when the Johansson news broke, work was already underway to reform the 1984 Personal Rights Protection Act to adapt it to the new challenges posed by AI in this area, since the statute omitted the voice from individuals’ identifying features. On March 21, 2024, the governor of the State of Tennessee signed what is popularly known as the “Elvis Law” (the ELVIS Act), in honor of singer and actor Elvis Presley, one of the world’s most iconic and extraordinary voices, which takes effect on July 1. The new law extends protection to the voice, defined as “a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual.” Thus, protection now extends not only to the individual’s voice but also to its imitation, granting its owner a civil action against those who, in bad faith, use it without proper authorization for advertising, merchandising, fundraising, donation solicitation, and similar purposes.

In Europe, voice and image are protected as personal data (Art. 4(1) of Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016, the GDPR). However, this protection may be insufficient against the unauthorized use of a person’s voice when it is processed in a way that no longer allows identification (Recital 26 GDPR) and, although not without debate, against imitations. In principle, this permits its use for training artificial intelligence systems as long as the person cannot be identified (nor, according to some scholars, would the recording used for training be protected by intellectual property rights), which is precisely what happens with deepfakes. Notably, the recent European Artificial Intelligence Act imposes a series of transparency obligations on providers and deployers of AI systems that generate deepfakes of real people, things, places, and events, but it does not resolve this issue directly; rather, it frames it by ensuring a high level of protection of, among others, the fundamental rights enshrined in the Charter of Fundamental Rights of the EU, which include personal data (Art. 8) but also freedom of expression (Art. 11) and artistic freedom (Art. 13). Thus, especially in the case of publicly relevant individuals, the classic conflict between freedom of expression (including the freedom to express oneself creatively) and the right to one’s own image will almost always arise.

Additionally, at the national level, the voice enjoys specific protection under Organic Law 1/1982, of May 5, on the civil protection of the right to honor, personal and family privacy, and one’s own image, whose Art. 7.6 deems illegitimate the unauthorized use “of a person’s name, voice or image for advertising, commercial or similar purposes.” Unlike the new US law, this protection would not extend to cases of imitation (if the actress hired by Altman imitated Johansson’s voice to train Sky) nor, of course, of mere similarity (if that actress’s voice naturally resembled Johansson’s), although, in either case, the conduct could eventually qualify as acts of unfair competition.

Conclusion

Cases like Scarlett Johansson’s highlight the inadequacy of our current regulatory framework to protect people’s voices (and images), given how easy today’s technology makes it to generate and manipulate them. Legislative responses, such as the Elvis Law in the United States, can serve as a reference when we undertake the necessary future reform.