U.S. Artificial Intelligence Safety Institute
AI is one of the defining technologies of our era. Its emergence, together with its multiplying contexts of use and increasing capabilities, presents enormous opportunities as well as significant present and future harms. To enable a future in which we realize AI’s full potential to benefit humanity and the planet, it is crucial that we encourage innovation in this transformative technology while mitigating its risks. One of the key challenges in achieving and sustaining safe AI innovation is a lack of scientific study of AI safety. A reliable, reproducible science of AI safety is urgently needed to accurately evaluate the capabilities and risks of models and systems and assess the effectiveness of mitigations and safeguards.
The U.S. AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), is advancing the science, practice, and adoption of AI safety across the spectrum of risks, including those to national security, public safety, and individual rights. Our efforts will initially focus on the priorities assigned to NIST under President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Safety Institute will pursue a range of projects, each dedicated to a specific challenge that is key to our mission; these will initially include advancing research and measurement science for AI safety, conducting safety evaluations of models and systems, and developing guidelines for evaluations and risk mitigations, including content authentication and the detection of synthetic content. As the technology and the world change, additional projects will likely be necessary.
Throughout its work, the Safety Institute will draw on NIST’s time-tested, scientifically grounded, and democratically inclusive processes to facilitate the development of trusted standards around new technologies. It will collaborate closely with diverse communities to gain a firm understanding of the technology’s current and emerging capabilities, limitations, and real-world impacts, and to foster networks, institutions, and norms around AI safety.
On April 16, 2024, U.S. Secretary of Commerce Gina Raimondo announced additional members of the AI Safety Institute’s executive leadership team. Read the announcement.
Director, U.S. AI Safety Institute
Elizabeth Kelly is director of the U.S. Artificial Intelligence Safety Institute. As director, she is responsible for providing executive leadership, management, and oversight of the AI Safety Institute and for coordinating with other AI policy and technical initiatives across NIST, the Department of Commerce, and the broader federal government.