The National Institute of Standards and Technology (NIST) is a U.S. federal agency that promotes innovation and industrial competitiveness by advancing measurement science, standards, and technology. Established in 1901, NIST is part of the U.S. Department of Commerce and is a key resource for businesses, academia, and government agencies.
NIST provides comprehensive guidelines for managing the risks associated with Artificial Intelligence (AI) through the AI Risk Management Framework (AI RMF) and its companion document, the AI RMF Playbook. These guidelines aim to help organizations develop and deploy trustworthy AI systems while effectively managing potential risks. The framework and playbook outline detailed, actionable steps organized around four main functions: Govern, Map, Measure, and Manage.
Govern
Legal and Regulatory Compliance:
- Organizations must understand and manage AI-specific legal and regulatory requirements to ensure compliance and address potential legal risks such as discrimination and privacy violations.
Integration of Trustworthy AI Principles:
- Policies and practices should incorporate characteristics of trustworthy AI, including transparency, accountability, fairness, and reliability. This ensures AI systems are designed to prevent harm and build public trust.
Risk Management Process:
- Establish clear procedures for documenting and managing AI risks. This includes defining roles and responsibilities, ongoing monitoring, and periodic reviews to ensure the effectiveness of risk management activities.
AI System Inventory and Decommissioning:
- Maintain a detailed inventory of AI systems and implement safe decommissioning processes to prevent increased risks and ensure compliance with regulatory requirements.
Training and Accountability:
- Provide comprehensive AI risk management training and ensure clear communication of roles and responsibilities. Assign executive leadership accountability for overseeing AI risks.
Diverse Team Involvement:
- Foster diversity within teams managing AI risks to incorporate varied perspectives and expertise, which enhances the organization’s capacity to anticipate and manage risks effectively.
Critical Thinking and Safety-First Mindset:
- Promote a culture of critical thinking and prioritize safety in designing, developing, deploying, and using AI systems. Encourage practices such as red-teaming and practical challenges to identify and mitigate risks.
Feedback and Incident Reporting:
- Establish mechanisms for collecting and integrating stakeholder feedback, and document and address incidents and limitations of AI systems. This enhances transparency and supports continuous improvement.
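To make such a mechanism concrete, here is a minimal, hypothetical incident log in Python; a real system would add a severity taxonomy, stakeholder notification, and persistence, none of which NIST specifies:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Incident:
    """A reported AI system incident or limitation (illustrative fields)."""
    system_id: str
    description: str
    reporter: str  # stakeholder who raised the issue
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False


class IncidentLog:
    """Collects incidents so they can be reviewed and addressed."""

    def __init__(self) -> None:
        self._incidents: list[Incident] = []

    def report(self, incident: Incident) -> None:
        self._incidents.append(incident)

    def open_incidents(self) -> list[Incident]:
        # Unresolved incidents feed the periodic risk review.
        return [i for i in self._incidents if not i.resolved]


log = IncidentLog()
log.report(Incident("sys-001", "Biased rankings for one region", "auditor"))
print(len(log.open_incidents()))  # 1 open incident awaiting triage
```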
Third-Party Risk Management:
- Implement policies and procedures for managing risks associated with third-party AI systems and data, and plan for contingencies to address these risks.
Manage
- System Purpose and Objectives:
- Regularly assess whether AI systems achieve their intended purposes and stated objectives. Make informed decisions about developing or deploying AI systems based on a thorough risk-benefit analysis.
- Prioritization and Response:
- Prioritize AI risks based on their impact and likelihood. Develop, plan, and document appropriate responses, including mitigation, transfer, avoidance, or acceptance of risks.
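The prioritization step can be illustrated with a simple impact-times-likelihood scoring; the numeric scales and response labels below are assumptions for the sketch, not values prescribed by NIST:

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    impact: int        # 1 (negligible) .. 5 (severe) -- assumed scale
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    response: str = "accept"  # mitigate / transfer / avoid / accept

    @property
    def score(self) -> int:
        # A common heuristic: rank risks by impact x likelihood.
        return self.impact * self.likelihood


risks = [
    Risk("Privacy violation", impact=5, likelihood=3, response="mitigate"),
    Risk("Model drift", impact=3, likelihood=4, response="mitigate"),
    Risk("Minor UI confusion", impact=1, likelihood=2),
]

# Highest-scoring risks are documented and addressed first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}: {risk.response}")
```

Each planned response (mitigation, transfer, avoidance, or acceptance) would then be documented alongside the risk it addresses.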
Map and Measure
- These sections focus on identifying and understanding the context in which AI systems operate (Map) and evaluating and monitoring the performance and risks of AI systems (Measure). They provide the analytical foundation for informed decision-making and effective risk management.
Key Takeaways
- Flexibility and Adaptability: The NIST guidelines are designed to be adaptable across various sectors and use cases, providing a flexible framework that organizations can tailor to their specific needs.
- Voluntary Framework: While the guidelines are comprehensive, they are intended to be voluntary, allowing organizations to implement as many or as few recommendations as applicable to their industry and use case.
- Continuous Improvement: Emphasizing transparency, accountability, and continuous improvement, the guidelines encourage organizations to regularly review and update their AI risk management practices in response to evolving technologies and risks.
- Comprehensive Approach: By addressing governance, risk management, stakeholder engagement, and third-party risks, the NIST guidelines provide a holistic approach to managing AI risks and promoting trustworthy AI systems.