We Need to Avert an AI Safety Winter - SaferAI’s commentary published by RUSI

March 11, 2025

Originally published on Rusi.org 

The third AI Summit in Paris (February 10-11th) was markedly different from its Seoul and Bletchley Park predecessors. A successful fundraiser, the summit was an occasion for President Macron to present his strategy for a third way in AI governance, beyond American or Chinese leadership. While previous summits maintained a tight focus on safety and limited participation, France positioned the event as AI’s equivalent of environmental policy’s Conference of the Parties. The summit expanded its scope to include 100 countries, four days of preliminary scientific and cultural activities, and a programme of side events accommodating every stakeholder’s taste.

Hosted in Paris’s iconic Grand Palais, with banners promoting “AI Science, not Science Fiction” adorning the main hall, the French Summit nonetheless sidelined the 100 scientists who had agreed in Seoul to deliver an International AI Safety Report summarising the scientific consensus on the risks posed by AI. Its programme excluded follow-up from the companies that had committed in Seoul to publish safety frameworks in time for Paris. The French summit downplayed “exaggerated anxieties” about AI risks, departing from the consensus-building efforts of previous summits.

The Sun Hangs Low for AI Safety

Mission accomplished: the French Summit was a setback for global AI risk management. But as some of the most prominent AI scientists have explained, now is not the time to pause safety research and governance efforts. The Paris Summit took place at a time when AI capabilities are advancing rapidly across many domains, when developers themselves warn of risks of misuse and accidents, and when mounting research suggests we are far from able to reliably control AI systems’ behaviour. Early 2025 also fired the starting gun on an international AI investment race that is likely to heighten these risks. OpenAI announced its $500 billion Stargate project; the Bank of China pitched $137 billion for AI infrastructure; at the Summit, France committed $112 billion; European companies launched a €150 billion EU AI Champions initiative; and EU Commission President von der Leyen announced a €58 billion combined InvestAI and “AI Factories” programme, declaring that “the AI race is far from over”.

In this context, an “AI safety winter” must be avoided. Despite the grim climate, recent developments offer potential paths forward. Firstly, while political momentum waned in the main hall, Summit side events demonstrated the progress made over the past year in technical and governance research on AI risk management.

Secondly, there are positive developments in the European Union: the first drafts of the General Purpose AI Code of Practice contained promising advances towards sound risk management procedures for frontier AI, and von der Leyen has proposed a CERN-like European AI Research Council in her political guidelines, which could enhance AI safety research.

Thirdly, the UK refused to sign the summit declaration, citing insufficient practical clarity on global governance and national security concerns. Coupled with its continued investment in sound AI governance endeavours – e.g. through the UK AI Security Institute and the ARIA programme – this can be interpreted as a willingness to defend the Bletchley legacy and to leverage its safety expertise as a competitive advantage in the AI race.

These parallel, complementary developments suggest the potential for a UK-EU “AI risk management bloc”, with both positioning themselves as “safe AI champions”. While neither the EU nor the UK can solve international AI governance alone, their potential collaboration – grounded in scientific risk assessment – brightens future prospects.

Lessons from Other Geostrategic Dual-Use Technologies

In practice, several options exist to reroute international AI diplomacy towards responsible AI governance. As suggested in a recent Oxford report, one option would be to organise the next summits in two tracks: one focused on frontier AI governance, continuing the Bletchley legacy, and the other building on the Paris legacy to explore advanced AI opportunities for the public interest among a larger set of stakeholders. Should that option fail, a new forum may be needed to move international AI risk management forward. Here, inspiration may be drawn from a historical precedent involving another dual-use technology: nuclear.

In the early days of international nuclear governance, conferences – most prominently the 1955 Geneva conference – were organised with an emphasis on sharing technical knowledge to enable the safe deployment of civilian nuclear technology. This drew attention to nuclear safety and resulted in the creation of an international forum for discussing nuclear governance at the height of the Cold War. Bringing together technical representatives of different governments to discuss civilian benefits helped ensure productive diplomatic dialogue on mitigating both civilian and military risks.

Looking to history could help us find innovative solutions. Now is not the time to go into hibernation, but to find a way back towards productive international cooperation on sound AI governance.