Striving to ensure that AI technologies are safe

SaferAI is a governance and research non-profit focused on AI risk management. Our organisation, based in France, works to incentivize responsible AI practices through policy recommendations, research, and innovative risk assessment tools.

More about us

Our focus areas

Standards & Governance

With a focus on large language models and general-purpose AI systems, we work to ensure the EU AI Act covers the important risks arising from these systems. We are drafting AI risk management standards at JTC 21, the body in charge of writing the technical standards implementing the EU AI Act. We also participate in all four working groups of the Code of Practice for general-purpose AI models.

We do comparable work at the US NIST AI Safety Institute Consortium (AISIC) and in the OECD G7 Hiroshima Process taskforce.


Ratings

We rate frontier AI companies' risk management practices.

Our objective is to enhance the accountability of the private actors shaping AI as they develop and deploy their systems.

The complete results are available on our ratings website.


Research

We conduct research on AI risk management, applying established knowledge from other domains to AI. Our current focus is quantitative risk assessment (QRA), i.e., the quantification of the likelihood and severity of potentially harmful events.

We are therefore developing this methodology and applying it to harms induced by cyberoffensive LLM capabilities.
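
As a minimal illustration of the QRA concept, not SaferAI's actual methodology, a quantitative risk estimate can be sketched as the sum over scenarios of probability times severity; the scenario names and numbers below are hypothetical.

    # Minimal sketch of a quantitative risk estimate (illustrative only).
    # Scenario data is hypothetical, not a real assessment.
    scenarios = [
        # (description, annual probability of occurrence, severity in arbitrary harm units)
        ("Model-assisted phishing campaign", 0.30, 10),
        ("Automated vulnerability exploitation", 0.05, 200),
        ("Critical infrastructure intrusion", 0.01, 5000),
    ]

    # Expected annual harm: sum of probability x severity across scenarios.
    expected_harm = sum(p * s for _, p, s in scenarios)
    print(f"Expected annual harm: {expected_harm:.1f} harm units")

In practice, each probability and severity would itself be estimated with uncertainty rather than as the point values used here.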