We're writing AI risk management standards at CEN-CENELEC JTC 21, the European standardisation committee tasked with drafting the harmonised standards that operationalise the EU AI Act.
Focusing on large language models and general-purpose AI systems, we want to ensure the EU AI Act covers all the significant risks these systems pose.
We evaluate AI system developers and deployers to help people distinguish responsible AI practices from irresponsible ones.
Our goal is to surface the risks and uncertainties of these systems, along with the best existing practices for managing them.
We hold workshops and discussions with key AI actors and policymakers to build consensus and better understand how to foster safe and trustworthy AI.
Through these discussions, we aim to learn what experts consider the most pressing AI risks, identify possible solutions, and translate these insights into policy.