
How is generative artificial intelligence challenging the traditional frameworks of intellectual property and authorship?

What ethical and legal obligations do academics and scientists have when using generative AI? What policies should scientific and academic institutions establish?

What initiative has Congress taken regarding the use of AI in the cultural field, particularly in awards and grants?


Generative artificial intelligence – such as ChatGPT, Copilot, Perplexity, Gemini, DALL-E, Stable Diffusion, Midjourney, and many others – is revolutionizing numerous sectors, including those of educational, academic, and scientific interest to ValgrAI. It also poses many legal and ethical challenges, particularly concerning intellectual property, which we commemorate today on World Intellectual Property Day. This is a sector in which generative AI truly shakes the very foundations and traditional frameworks of intellectual property and authorship. These are some of the most significant legal problems it raises.

First, AI creations are not human, and current legislation assumes that a work protected by copyright requires a human author. While this premise is already settled in Europe and the USA, a court in China has exceptionally held that a text written by artificial intelligence is protected by copyright.

Second, generative AI systems are trained on vast amounts of data, generally protected by intellectual property and by the sui generis rights that protect databases. It is very likely that these large systems have, in general, learned by infringing those rights, and they will have to demonstrate otherwise.

Third – and it is particularly important to remember this today – humans who use generative AI to generate content must comply with important legal obligations and ethical duties, and it is by no means clear to what extent their instructions and prompts may be protected in any way.

Generative AI has also reached research and academia. Researchers and supposedly prestigious journals include content generated by ChatGPT without citing it, and do not even bother to disguise it – as in this grotesque case where the AI wrote the introduction and the authors did not bother to revise it ("Certainly, here is a possible introduction for your topic:").



In response, ethical, legal, and responsible academic and research practices for the use of generative AI tools are taking shape. In addition to some university guidelines, I highlight today the European Commission's recommendations of March 20, 2024 for researchers and for institutions like ValgrAI.

Researchers must be transparent about the use of AI in their research processes, specifically mentioning the tools used, their versions, and how they influenced the research outcome. The guidelines emphasize that researchers remain fully responsible for the scientific content generated with the aid of AI, which includes maintaining a critical approach and ensuring the integrity of the data and results presented. They must also bear in mind that AI can produce results that vary by chance, even from the same data inputs.

Researchers should likewise be cautious with the privacy and confidentiality of the data they input, especially when handling sensitive or protected data. Along these lines, the guidelines recommend avoiding generative AI in particularly sensitive research fields, as well as in activities such as the peer review of research projects or scientific publications. More generally, everyone should keep abreast of the changing legislation on these issues and pursue continuous training on generative AI, staying up to date with best practices and technological developments. The aim is to foster an ethical and responsible academic and research environment under the principles of scientific integrity: reliability and trust, honesty, responsibility and accountability, and respect for the scientific community and society.

As for scientific institutions such as ValgrAI, the recommendations remind them to establish clear policies on the use of AI, including training and monitoring to prevent abuses.

And on World Intellectual Property Day, pending further regulatory advances in the field, I also wish to recall the Spanish Congress of Deputies' approval of a non-legislative motion (Proposición no de Ley) presented by the Socialist Parliamentary Group on the use of artificial intelligence (AI) in the cultural and creative field, especially in public calls.

In this regard, it is worth remembering cases like that of Théâtre D'opéra Spatial, a work created entirely with Midjourney and awarded a prize at the Colorado State Fair in the United States. Closer to home, controversy arose when the Ministry of Youth distributed on social networks images of Disney characters that had apparently been created with artificial intelligence.

This motion points out the need to regulate the use of AI in the creation of audiovisual and graphic content, specifically to ensure that rights are not violated in public competitions. It also insists that generative AI creations, not being human, cannot be considered original and may infringe copyright. It urges the government to develop a best-practices document and suggests creating a regulatory agency or adopting an ethical quality seal in these cultural contexts. Finally, it calls on all Administrations to support creators and artists.

I conclude by recalling that there is apparently already a Guide from the Ministry of Culture, which unfortunately is not available; only the information note on this Guide "on Good Practices Relating to the Use of Artificial Intelligence Systems in the Field of the Ministry of Culture" has been published. According to that note, the guide calls for respecting authors' rights and for integrating AI as a support tool without replacing human creation; in public contracting and grants, human creativity will be prioritized, and the guide even advocates excluding works created entirely by AI.


Lorenzo Cotino Hueso, ValgrAI, Professor of Constitutional Law at the University of Valencia.

Note: to produce this content, the author has used ChatGPT as support in reviewing the drafting.