Many AI scientists and other notable figures now believe that AI poses a risk of extinction on par with pandemics and nuclear war. A sufficiently powerful AI could self-replicate beyond its developers' control, and even less powerful AI systems could be misused. Given these risks, it is crucial that AI research be held to high standards of caution, trustworthiness, security, and oversight.
To determine what AI research standards should be and how they should be implemented, it may be helpful to consider precedents from other fields conducting dangerous research.
This memo outlines selected standards in biosafety, focusing on how high-risk biological agents are handled in biosafety level (BSL) 3 and 4 labs in the United States. It then considers how similar standards could be applied to high-risk AI research. Topics covered include:
Read the complete document here: