The First AI Risk Management Ratings Expose a Two-Speed Industry with Common Shortcomings

October 2, 2024

We have produced the first ratings of AI companies' risk management practices. Our analysis documents significant shortcomings across the industry. Using a framework interoperable with global AI regulations and standards, including the EU AI Act, ISO/IEC 23894, and the G7 Hiroshima Process, the assessment ranks AI companies' risk management efforts. It shows that AI companies are united in their shortcomings but split into two groups: laggards and moderates.

Key Findings:

  • The report ranks companies on a 0–5 scale. Anthropic, OpenAI and Google DeepMind lead with scores of 2.2, 1.6 and 1.5 respectively, corresponding to “Moderate” and “Weak”, indicating that even the leaders fall far short of adequate risk management.
  • Meta, Mistral AI and xAI lag behind with scores of 0.7, 0.1 and 0 respectively, all rated “Very Weak”, indicating an absence of serious risk management effort.
"These ratings unveil two critical problems in AI risk management. The moderates are much stronger than the laggards, yet both groups still fall short of sufficient risk management practices. We urgently need to raise both the floor and the ceiling of the AI industry's risk management," said Siméon Campos, Founder and Executive Director of SaferAI.

The AI risk management ratings aim to enhance accountability among the private actors shaping AI development and deployment. By providing a framework for assessing risk management practices, we empower policymakers, investors, and AI users to make informed decisions about AI adoption and regulation.

"Excited for this project — one of the many important steps we need to take on our way to developing safe AGI," commented Florent Crivello, CEO of lindy.ai, an AI startup developing AI employees.

The ratings have also garnered support from AI scientists.

“It's crucial to develop an independent capacity to understand and rank the measures taken by private companies to guarantee the safety of the models they develop and deploy; we can't let them grade their own homework. I strongly encourage the development of initiatives like this one, which aim to improve our collective ability to assess and compare companies' safety approaches,” remarked Yoshua Bengio, a pioneering figure in artificial intelligence and deep learning and recipient of the Turing Award.

The research underpinning the ratings offers a roadmap for improvement that complements companies' efforts to comply with voluntary risk management commitments and regulatory requirements.

"We call on AI companies to view these ratings as an opportunity for growth," said Siméon Campos. "In the coming months, we'll work on further improving our ratings and integrating them into the existing infrastructure of rating users such as investors. By working together to raise the bar on AI risk management, we can ensure that technological advancements benefit society while minimizing potential risks."

In the spirit of transparency, we sent these ratings to the assessed companies two weeks prior to publication, providing them with a right of reply. For more information about the AI risk management ratings and our methodology, please visit https://ratings.safer-ai.org.
