NIST Releases Draft Artificial Intelligence Risk Management Framework; Seeks Public Comments

Client memorandum | April 18, 2022

On March 17, 2022, the National Institute of Standards and Technology (“NIST”) released an initial draft of its Artificial Intelligence Risk Management Framework (“AI RMF”) for addressing risks in the design, development, use and evaluation of AI products, services and systems. The framework was designed with two primary goals: (i) to catalyze consumer adoption by increasing the trustworthiness of AI and (ii) to better manage risks associated with the use of AI. NIST intends the AI RMF for voluntary use, and aims to build upon and support existing AI risk management efforts, including standards issued by the IEEE and ISO/IEC SC42. NIST has requested feedback on the initial draft by April 29, 2022. NIST plans to publish a second version of the draft by the end of July, and to release version 1.0 of the AI RMF by January 2023. As part of collecting stakeholder feedback, NIST held an interactive workshop from March 29–31, attended by nearly 2,000 participants across various sectors and industries. In this memorandum, we summarize the AI RMF and various suggestions for implementing it that were discussed at the workshop.

This communication is for general information only. It is not intended, nor should it be relied upon, as legal advice. In some jurisdictions, this may be considered attorney advertising. Please refer to the firm’s data policy page for further information.