Artificial intelligence (AI) is a technology with the potential to transform our society and our planet, in both positive and negative ways. AI can help us solve complex problems, improve education and health, create economic and social opportunities, and protect the environment. But it can also create risks and ethical challenges, such as discrimination, privacy violations, manipulation, loss of control, and unclear liability.

Indeed, AI can pose dangers and broad impacts for humanity and structural impacts for society, and can significantly affect fundamental rights and democratic and constitutional principles.

It is therefore important to reflect on the ethics of AI, that is, on the values and principles that should guide its development and use so that it benefits humanity and respects human rights and human dignity. Some examples of the ethical dilemmas AI poses: How can we avoid or minimize bias and stereotyping in algorithms and data? How can we ensure that self-driving cars make fair and safe decisions in emergencies? How should the authorship and creativity of AI-generated works be recognized and protected? How can AI systems be made transparent, explainable, and auditable? How should the use of AI in the judicial sphere be regulated while guaranteeing the right to due process?

Ethics and Regulation of AI

That is why the development and use of AI must be guided by ethics and regulation. To this end, a set of common values and principles has been established to steer the healthy and responsible development of AI, such as respect for cultural diversity, gender equality, solidarity, justice, participation, human well-being, and sustainable development. Numerous international declarations address AI principles; among them, the initiatives of the European Union and UNESCO stand out. The European Commission's High-Level Expert Group on Artificial Intelligence, set up in June 2018, drew up its Ethics Guidelines for Trustworthy AI, with more than 100 elements for assessing whether an AI system meets the criteria for trustworthiness. From these, tools such as the Assessment List for Trustworthy Artificial Intelligence (ALTAI) have been developed to evaluate compliance with those criteria.

The development and use of AI must be guided by ethics and regulation, so that it is beneficial to humanity and respectful of human rights and human dignity.

UNESCO, for its part, has developed the first global ethical framework on AI, adopted by its 193 Member States in November 2021. The Recommendation on the Ethics of Artificial Intelligence establishes a series of common values and principles to guide the healthy and responsible development of AI. ValgrAI members have explained and disseminated the basic elements of this international instrument.

Beyond ethics and voluntary, non-binding documents, current EU and Spanish regulation already applies concretely to AI. For example, whenever an AI system processes personal data, which is common across the various phases of an AI system's life cycle, data protection law applies to it. This already strongly conditions the development and use of AI, and the Spanish Data Protection Agency has published guides on how to comply with the current rules. Monitoring this is essential, not only to avoid harming people's privacy and data protection, but also to prevent an AI project from collapsing like a house of cards and becoming unusable, and to avoid sanctions that can run into millions. In general, the data protection regime must be complied with for AI, especially where specially protected categories of data are involved (Art. 9 GDPR). In many cases, the special guarantees of the right not to be subject to automated decisions must also be applied (Art. 22 GDPR). EU cybersecurity regulation is likewise growing and can affect AI systems in many cases.

The EU is also regulating liability for AI systems that cause harm to people or their rights. And then, of course, there is the future AI Regulation, expected to be approved in 2023. This regulation, applicable in all EU countries, establishes specific requirements for high-risk systems, such as risk management, data quality and governance, documentation, record-keeping, accuracy, and robustness, among others. No investment in AI can afford to ignore this forthcoming regulation.

The impact of AI on digital rights and algorithmic non-discrimination must also be taken into account. At a general level, the Digital Bill of Rights synthesizes the key elements to consider, including the possible regulation of neurorights, in response to systems that often incorporate AI.

Thus, AI ethics is no longer just a set of increasingly internationally agreed principles; it has inspired and shaped the binding regulation to which providers, developers, and users of AI systems are subject, as well as the rules that protect those affected by these systems. This is reflected in the new European Union AI Regulation and in the assessment of AI's impact across fields ranging from privacy and security to digital rights and non-discrimination.

At ValgrAI we are committed to including ethics and values in our teaching and courses (AI for Teachers, AI Content Generation, and AI for ICT Professionals) to train responsible professionals committed to the common good. We believe that AI ethics is not just a matter of rules and regulations, but also of attitudes and behaviors, fostered through critical thinking, intercultural dialogue, respect for human rights and diversity, and citizen participation. It is also important that AI professionals are trained in ethics and values and are aware that a whole body of regulation already applies in the EU. The public and private sectors need not wait for the future AI Regulation to be adopted; they must already attend to regulatory compliance.