US Government Releases Artificial Intelligence Governance Framework
Client memorandum | January 31, 2023
Authors: Amir R. Ghavi, Katelyn E. James, and Cecily D’Amore
On January 26, 2023, the US government, through the National Institute of Standards and Technology (“NIST”)[1], released an artificial intelligence (“AI”) governance framework, titled the Artificial Intelligence Risk Management Framework (“AI RMF”) 1.0. The AI RMF is the result of a multiyear process involving workshops and industry participation and feedback, with the goal of mitigating risks in the design, development, use and evaluation of AI products, services and systems. The AI RMF was designed with two primary goals: (i) to help increase trustworthiness of AI and (ii) to manage risks associated with the development and use of AI. NIST intends to finalize its draft guidebook to the AI RMF, called the AI RMF Playbook, in the Spring of 2023.[2]
The AI RMF 1.0 is divided into two parts: (I) Foundational Information and (II) Core and Profiles. Part I addresses how organizations should consider framing risks related to their AI systems, including:
- Understanding and addressing the risk, impact and harm that may be associated with AI systems.
- Addressing the challenges for AI risk management, including those related to third-party software, hardware and data.
- Incorporating a broad set of perspectives across the AI life cycle.
Part I also describes trustworthy AI systems, including characteristics such as validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy-enhanced, and fairness with harmful bias managed.
Part II describes features to address risks associated with the use and deployment of AI systems. These features include:
- Governance: a culture of risk management;
- Mapping: context is recognized and risks identified;
- Measurement: identified risks are assessed, analyzed or tracked; and
- Management: risks are prioritized and acted upon based on a projected impact.
The AI RMF is not legally binding or required for AI development and deployment, but it will likely become a de facto standard for AI governance.[3] In 2022, we saw the power of AI to extend human capability, creativity and insight. But in the absence of governance, AI systems[4] (like all disruptive technologies) pose potential risks. Researchers have highlighted latent risks in the raw data sets used to train AI systems and unintended consequences in the use and operation of AI systems.[5] The AI RMF is intended to address these risks and equip AI actors[6] to manage them responsibly, enhancing trustworthiness and ultimately cultivating public trust in AI systems. As AI continues to evolve, NIST intends for the AI RMF to evolve with it to reflect new knowledge, awareness and practices. In this memorandum, we summarize the AI RMF for organizations that currently use AI or plan to use it in the future.
I. Foundational Information: AI Risks and Trustworthiness
The AI RMF states that trustworthy systems must be responsive to a variety of criteria. As trustworthiness is inextricably connected to social and organizational behavior, the AI RMF recommends that humans guide the specific metrics related to AI trustworthiness. However, the AI RMF acknowledges that a comprehensive approach to risk management must recognize tradeoffs. The AI RMF takes the position that a trustworthy AI system is: (a) valid and reliable, (b) safe, (c) secure and resilient, (d) accountable and transparent, (e) explainable and interpretable, (f) privacy-enhanced, and (g) fair with harmful bias managed.
Figure: Characteristics of trustworthy AI systems. Source: NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0)
(a) Valid and Reliable: The AI RMF posits that the measurement of validity, accuracy, robustness and reliability (detailed below) contributes to trustworthiness. It suggests ongoing testing or monitoring to confirm that a system performs as intended, prioritizing the minimization of potential negative impacts, and employing human intervention as necessary where the AI system cannot detect or correct errors.
- Validation: The confirmation that requirements for an intended use or application have been fulfilled through objective evidence can decrease negative AI risks and increase trustworthiness.
- Accuracy: Measures of accuracy should consider false positive and false negative rates and human-AI teaming, and demonstrate validity that goes beyond the training conditions (a simple illustration of these rates follows this list). They should be clearly defined and documented, and include details about test methodology that are representative of conditions of expected use.
- Robustness or generalizability: Being able to maintain an AI system’s level of performance under a variety of circumstances, including those not initially anticipated, can minimize potential harms.
- Reliability: A goal of overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the lifetime of the system, can contribute to an AI system’s trustworthiness.
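By way of illustration only, the short Python sketch below computes the false positive and false negative rates referenced in the accuracy discussion above. The AI RMF does not prescribe any code; the function names and evaluation counts here are hypothetical.

```python
# Illustrative sketch only: false positive and false negative rates for a
# binary classifier, computed from hypothetical evaluation counts.

def false_positive_rate(fp: int, tn: int) -> float:
    """Share of actual negatives the system wrongly flagged as positive."""
    return fp / (fp + tn)

def false_negative_rate(fn: int, tp: int) -> float:
    """Share of actual positives the system failed to flag."""
    return fn / (fn + tp)

# Hypothetical test results gathered under conditions representative of
# expected use, as the AI RMF recommends for accuracy measurement:
tp, fp, tn, fn = 880, 40, 950, 120

print(f"False positive rate: {false_positive_rate(fp, tn):.3f}")  # 0.040
print(f"False negative rate: {false_negative_rate(fn, tp):.3f}")  # 0.120
```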
(b) Safe: The AI RMF encourages safe operation of AI systems through tailored AI risk management based on the context and severity of the potential risks presented. It states that those approaches should start early in the AI system’s life cycle and should allow for the ability to shut down, modify or incorporate human intervention into systems that deviate from intended or expected functionality. Additionally, the AI RMF contends that safe AI systems are improved through responsible design, development and deployment; clear information to deployers on responsible uses of the system; responsible decision-making by deployers and end users; and explanations and documentation of risk based on empirical evidence of incidents.
(c) Secure and Resilient: The AI RMF considers security and resilience to be related but distinct characteristics. It suggests that resilience is the ability to maintain normal functionality in the face of adverse or unexpected events or changes in the environment, while security includes the protocols to avoid, protect against, respond to or recover from attacks. It contends that AI systems may be secure if they can maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.
(d) Accountable and Transparent: The AI RMF takes the position that trustworthy AI depends on accountability, and that accountability presupposes transparency. It states that transparency should involve tailoring how access to information is provided based on the stage of the AI life cycle and on the role or knowledge of the AI actors or others interacting with or using the AI system. That information may include design decisions, training data, model structure, the model’s intended use cases, and how and when deployment, post-deployment or end-use decisions were made and by whom. The AI RMF recommends that developers test different types of transparency tools to ensure that AI systems are used as intended.
(e) Explainable and Interpretable: In the AI RMF, explainability refers to a representation of the mechanisms underlying an AI system’s operation, whereas interpretability refers to the meaning of an AI system’s output in the context of its designed functional purposes. The AI RMF suggests that this information can help end users understand the purposes and potential impact of an AI system, and that related risks should be managed by tailoring descriptions to individual differences and by communicating why the AI system made particular predictions or recommendations.
(f) Privacy-Enhanced: The AI RMF takes the position that privacy helps safeguard human autonomy, identity and dignity, and that AI system decisions should be guided by privacy values such as anonymity, confidentiality and control. The AI RMF states that privacy-related risks may influence security, bias and transparency, and that AI systems may introduce new risks to privacy, including inferences that may identify individuals or information that was previously private.
(g) Fair – with Harmful Bias Managed: The AI RMF reflects the perspective that bias goes beyond data representativeness and demographic balance, and is tightly associated with fairness in society. Although perceptions of fairness differ and may shift depending on application, the AI RMF contends that fairness in AI is rooted in concerns for equality and equity. It suggests that organizations’ risk management efforts should recognize and consider these differences and the ways they may impact AI systems. The AI RMF identifies several categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive.
- Systemic Bias. Systemic bias can impact AI systems at various levels, ranging from an AI dataset to the broader society that uses AI systems.
- Computational and Statistical Bias. This form of bias may be present in AI datasets and algorithmic processes, stemming from systematic errors such as non-representative sampling.
- Human-Cognitive Bias. How individuals perceive AI system information and make decisions, as well as how they think about purposes and functions of an AI system, may impact many stages of the AI life cycle, including its design, implementation, operation and maintenance.
II. Core and Profiles
AI RMF Core
The AI RMF Core sets out the outcomes and actions that are meant to enable dialogue, understanding and activities necessary to manage AI risks. The Core comprises three elements: (i) functions, (ii) categories and (iii) subcategories. The four functions organize AI risk management activities at the highest level: to govern, map, measure and manage AI risks. The categories and subcategories subdivide each function into specific outcomes and actions. The AI RMF Core functions should be implemented to reflect diverse and multidisciplinary perspectives, and may be applied differently among organizations to manage risk based on their resources and capabilities. However, we note that while NIST advocates that the AI RMF Core should include views from outside of the organization, this may be impractical for organizations trying to maintain trade secrets or the confidentiality of their products and services.
Figure: The AI RMF Core functions (Govern, Map, Measure and Manage). Source: NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0)
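To make the Core’s three-level structure concrete, the minimal Python sketch below models the function, category and subcategory hierarchy. The class and field names are our own illustration; they are not part of the framework.

```python
# Illustrative sketch only: the AI RMF Core's three-level hierarchy
# (functions -> categories -> subcategories) as a simple data model.
from dataclasses import dataclass, field

@dataclass
class Category:
    outcome: str  # the outcome this category describes
    subcategories: list[str] = field(default_factory=list)  # specific actions

@dataclass
class CoreFunction:
    name: str  # one of Govern, Map, Measure or Manage
    categories: list[Category] = field(default_factory=list)

# One of the four functions, populated with an example category drawn from
# the framework (subcategories omitted):
govern = CoreFunction(
    name="Govern",
    categories=[
        Category(
            outcome="Accountability structures are in place so that the "
                    "appropriate teams and individuals are empowered, "
                    "responsible and trained for mapping, measuring and "
                    "managing AI risks."
        ),
    ],
)
```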
Govern: The Govern function is designed to cultivate a culture of risk management within organizations. The AI RMF establishes that governance focuses on both technical aspects of AI system design and development and on organizational practices and competencies that directly affect the individuals involved in training, deploying and monitoring such systems.
The AI RMF suggests the Govern function is cross-cutting and enables the other functions of the AI risk management process, particularly those related to compliance or evaluation. It takes the position that governance is a continual and intrinsic requirement for effective AI risk management over an AI system’s lifespan. The AI RMF states that governance provides a structure through which AI risk management functions can better align with organizational policies and strategic priorities, including those that do not directly relate to AI systems. Practices related to governing AI risks are described in the NIST AI RMF Playbook.
The March 2022 workshop hosted by NIST highlighted the potential harms of poorly governed AI development and deployment in high-stakes areas such as banking, transportation, criminal justice and employment.
NIST provides example categories for the Govern function (the AI RMF further divides each category into subcategories):
- Policies, processes, procedures and practices across the organization related to the mapping, measuring and managing of AI risks are in place, transparent and implemented effectively.
- Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible and trained for mapping, measuring and managing AI risks.
- Workforce diversity, equity, inclusion and accessibility processes are prioritized in the mapping, measuring and managing of AI risks throughout the life cycle.
- Organizational teams are committed to a culture that considers and communicates AI risk.
- Processes are in place for robust engagement with relevant AI actors.
- Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.
Map: The Map function establishes context to frame risks related to an AI system. Outcomes in the Map function inform both the Measure and Manage functions. The AI RMF acknowledges that the diversity of actors and activities at various stages of an AI system’s life cycle can make it difficult to anticipate the impacts of AI systems, and AI actors in charge of one part of an AI system often will not have full visibility into or control over another part. Further, the AI RMF suggests that information gathered while carrying out this function can inform decisions about model management, including the initial decision about whether an AI solution is necessary at all.
The AI RMF recommends that implementation of the Map function incorporate perspectives from a diverse internal team as well as from those external to the team that developed or deployed the AI system, including external collaborators, end users and potentially impacted communities. The AI RMF states that such perspectives are critical to this function: they may help organizations prevent negative risks and develop more trustworthy AI systems by improving their ability to understand context, identify the limitations of AI processes, and anticipate risks of the use of AI beyond its intended use.
NIST provides example categories for the Map function:
- Context is established and understood.
- Classification of the AI system is performed.
- AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.
- Risks and benefits are mapped for all components of the AI system, including third-party software and data.
- Impacts to individuals, groups, communities, organizations and society are characterized.
Measure: The Measure function helps organizations build the knowledge relevant to AI risks, including by tracking metrics for the trustworthy characteristics described above, the social impact of AI systems and human-AI configurations. Under the AI RMF, the Measure function includes quantitative, qualitative or mixed-method assessment and analysis to monitor AI risks and their impacts. The AI RMF suggests that these methodologies should adhere to scientific, legal and ethical norms and be carried out in an open and transparent process. It states that measurement can provide a traceable basis to inform management decisions when tradeoffs among the trustworthy characteristics arise. Upon completion of this function, the AI RMF takes the position that objective, repeatable or scalable test, evaluation, verification and validation (“TEVV”) processes will be in place, followed and documented. Practices related to measuring AI risks are described in the NIST AI RMF Playbook.
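As an illustration only of what a mechanism for tracking identified AI risks over time might look like in code, the hypothetical Python sketch below logs one quantitative metric monthly and flags readings that drift past a documented threshold. The metric, threshold and readings are invented for this example.

```python
# Illustrative sketch only: tracking a single quantitative risk metric over
# time and flagging readings that exceed a documented threshold.

THRESHOLD = 0.05  # hypothetical documented maximum acceptable metric value

# Hypothetical monthly readings of a fairness-related metric, e.g. the gap
# in false negative rates between two groups of end users:
metric_by_month = {"2023-01": 0.02, "2023-02": 0.04, "2023-03": 0.07}

for month, value in metric_by_month.items():
    status = "REVIEW" if value > THRESHOLD else "ok"
    print(f"{month}: measured value = {value:.2f} [{status}]")
```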
NIST provides example categories for the Measure function:
- Appropriate methods and metrics are identified and applied.
- AI systems are evaluated for trustworthy characteristics.
- Mechanisms for tracking identified AI risks over time are in place.
- Feedback about efficacy of measurement is gathered and assessed.
Manage: The Manage function ties together all four functions by allocating risk management resources on a regular basis as defined by the Govern function. According to the AI RMF, it addresses the risks that have been mapped and measured in order to maximize the benefits of AI systems and minimize any adverse impacts. It states that contextual information previously gathered and already-established systemic documentation practices are also utilized in this function to bolster risk management efforts. The AI RMF urges Framework users to continue to apply the Manage function to deployed AI systems as methods, contexts, risks, needs and expectations all evolve over time. Practices related to managing AI risks are described in the NIST AI RMF Playbook.
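To illustrate what prioritizing risks based on a projected impact could look like in practice, the hypothetical sketch below ranks mapped and measured risks by a simple likelihood-times-severity score. The risk descriptions and scores are invented; the AI RMF does not mandate any particular scoring method.

```python
# Illustrative sketch only: a minimal risk register that ranks AI risks by
# projected impact (likelihood x severity), in the spirit of the Manage
# function. All entries and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    likelihood: float  # estimated probability of occurrence, 0 to 1
    severity: float    # projected impact if realized, 0 to 10

    @property
    def priority(self) -> float:
        return self.likelihood * self.severity

risks = [
    AIRisk("Training data not representative of deployment conditions", 0.6, 8.0),
    AIRisk("Third-party model update silently changes system behavior", 0.3, 6.0),
    AIRisk("End users over-rely on system recommendations", 0.5, 4.0),
]

# Highest-priority risks are addressed first:
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"{r.priority:4.1f}  {r.description}")
```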
NIST provides example categories for the Manage function:
- AI risks based on assessments and other analytical output from the Map and Measure functions are prioritized, responded to and managed.
- Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented and informed by input from relevant AI actors.
- AI risks and benefits from third-party entities are managed.
- Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.
AI RMF Profiles
The AI RMF establishes domain-tailored profiles for implementing the AI RMF functions. AI RMF use-case profiles, such as a hiring profile or a fair housing profile, are implementations of the AI RMF functions, categories and subcategories for a specific setting or application. These profiles may illustrate how risk can be managed at a certain stage of the AI life cycle or in a specific sector, technology or end-use application.
AI RMF temporal profiles describe either the current or the desired target state of specific AI risk management activities in a given sector, industry, organization or application context. A current profile indicates how AI is currently being managed and the related risks in terms of current outcomes, whereas a target profile indicates the outcomes needed to achieve the desired AI risk management goals; comparing the two can reveal gaps to address.
AI RMF cross-sectoral profiles cover risks of models or applications that can be used across use cases or sectors. To preserve flexibility, the AI RMF does not prescribe profile templates.
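As an illustration of how comparing a current profile with a target profile can reveal gaps, the hypothetical sketch below treats each profile as a set of achieved outcomes and takes the difference. The outcome labels are invented for this example, not drawn from the AI RMF.

```python
# Illustrative sketch only: comparing a hypothetical current profile with a
# target profile to surface AI risk management gaps.

current_profile = {
    "context established and understood",
    "mechanisms for tracking AI risks in place",
}
target_profile = {
    "context established and understood",
    "mechanisms for tracking AI risks in place",
    "third-party risks and benefits managed",
    "impacts to individuals and communities characterized",
}

# Outcomes in the target state that the organization has not yet achieved:
gaps = target_profile - current_profile
for outcome in sorted(gaps):
    print("Gap to address:", outcome)
```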
[1] NIST, a part of the Department of Commerce, promotes US innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve quality of life.
[2] The AI RMF Playbook suggests ways to use and develop the AI RMF 1.0. The playbook remains in draft form, and NIST is seeking public comments until February 27, 2023. Feedback can be sent to AIframework@nist.gov, and the draft AI RMF Playbook is available on NIST’s website.
[3] In 2014, NIST released the Cybersecurity Framework, a voluntary framework that establishes comprehensive cybersecurity and information security practices. Absent federal standards or regulations, the Cybersecurity Framework has become the de facto standard for what is considered commercially reasonable cybersecurity practices. It has since been adopted by federal agencies and governments, as well as by private entities and organizations, and is recognized internationally and available in ten languages.
[4] The AI RMF refers to an “AI system” as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations or decisions influencing real or virtual environments, and that is designed to operate with varying levels of autonomy.
[5] For example, the Federal Trade Commission (“FTC”) was tasked by Congress with completing a study on how AI can be used to address online harms. In its June 2022 report to Congress, Combatting Online Harms Through Innovation, the FTC addressed how datasets that support AI tools are often “not robust or accurate enough to avoid false positives or false negatives.” Additionally, in 2020, the FTC released guidance on its blog regarding the commercial use of AI and algorithms, titled “Using Artificial Intelligence and Algorithms.” The guidance outlined several recommendations for businesses deploying AI into the market, organized around four key values: (i) transparency, (ii) fairness, (iii) accuracy and (iv) accountability. In 2021, the FTC offered additional insight on the use of AI in a second piece of guidance, titled “Aiming for truth, fairness, and equity in your company’s use of AI,” which focused on examples of when AI practices can be deceptive or unfair, building on its 2020 recommendations.
[6] The AI RMF refers to “AI actors” as individuals and organizations who deploy or operate AI.
This communication is for general information only. It is not intended, nor should it be relied upon, as legal advice. In some jurisdictions, this may be considered attorney advertising. Please refer to the firm’s data policy page for further information.