Description
AI Evaluation: Capabilities & Safety (International Programme)
150h
February to May 2026
Online work | Hands-on courses | In-person capstone
Our Mission
Our mission is simple yet urgent: to train the people who will determine whether AI is safe and beneficial. This is what AI evaluation is all about.
AI is moving faster than our ability to understand and evaluate its capabilities and risks. Right now, governments, companies, and researchers don’t have a shared language or tools to make sure these systems are safe, reliable, and aligned with human values.
That’s where this programme comes in. The International Programme on AI Evaluation: Capabilities & Safety is here to close that gap. We are creating a new academic discipline and preparing the next generation of specialists with the technical and policy expertise needed to test, monitor, and guide the safe development of AI.
Our Vision
We believe AI evaluation should become a cornerstone of how AI is built and used worldwide: a universal discipline taught in every top university, embedded in every lab and regulatory body, and serving as a safeguard so that advanced AI truly benefits humanity.
This programme is the first step towards establishing the world’s first MSc in AI Evaluation, and building a lasting home for this field – a place where researchers, practitioners, and policymakers can speak the same language and shape the future of AI together.
Why this Programme?
Artificial intelligence is advancing at unprecedented speed. Yet, as frontier AI systems grow more powerful, our ability to evaluate their capabilities and risks has not kept pace.
This programme exists to change that.
- Talent Gap: AI Safety Institutes and leading labs worldwide face a shortage of experts in evaluation.
- Unique Approach: We combine technical depth with policy and governance, bridging a gap no other programme fills.
- Impact Pathway: Our graduates will be equipped to join top research labs, government agencies, and industry, where their skills are urgently needed.
This is the first step toward establishing AI Evaluation & Safety as a formal academic discipline – a foundation for the first MSc in the field. AI is advancing faster than our ability to evaluate it. We are changing that.
This programme brings together 40 exceptional students and professionals from around the world for a 150-hour hybrid course that blends lectures, hands-on labs, and a capstone project week in Valencia.
The programme is fully funded through Open Philanthropy and certified by ValgrAI.
Who’s Involved?
This programme has the support of faculty from leading universities, including the University of Cambridge, Stanford, Princeton, Beijing Normal University, Renmin University of China, William & Mary, and the Technical University of Valencia.
Confirmed faculty also come from key institutions, research organizations, and companies such as the EU AI Office, the UK AI Safety Institute, CAIS, FAR AI, RAND, Epoch AI, Apollo Research, Redwood Research, Microsoft Research, and Google DeepMind.
Shape the future of AI evaluation.
Apply to join the next cohort.
Get the full picture. Build the missing expertise.
This programme gives you the full picture: the tools, frameworks, and perspectives to connect the dots between machine learning, evaluation, and governance.
You’ll leave ready to:
- Understand how AI systems are tested and validated.
- Speak the language of both researchers and policymakers.
- Join a network building the standards the world will rely on.
In collaboration with…
