Ethical AI and Librarianship

A Resource Guide

AI Risk Management Framework

Field: Description
Title: AI Risk Management Framework
Type: Guidelines & Policies
Creator: NIST (National Institute of Standards and Technology)
Link: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
Creation Date: 01/23/2024
Last Updated Date: --
Summary: Created by the U.S. National Institute of Standards and Technology (NIST), the AI Risk Management Framework (AI RMF) is a voluntary guideline that helps organizations design, develop, deploy, and use AI systems responsibly and manage the associated risks. These risks include shifts in training data that can affect system functionality and trustworthiness, as well as complex deployment contexts that make AI system failures difficult to detect and mitigate. The framework has two main parts. Part 1 frames AI risks, identifies the Framework's intended audience, and outlines the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias mitigated. Part 2 presents four core functions for managing AI risks across the AI lifecycle: Govern, Map, Measure, and Manage, each broken down into categories and subcategories. The framework was developed over 18 months with input from more than 240 organizations across academia, industry, civil society, and government.
Topic: AI Governance, AI Risk, Ethical AI, Multi-sector
Source and Link: NIST (National Institute of Standards and Technology). https://airc.nist.gov/airmf-resources/airmf/
Access: Open
Accessibility: Open
Audience: Librarians (general); information professionals; scholars and students
Platform or Format: Document (.pdf)
Length: 48 pages
Geography: USA
Language: ENG
Description Date: 06/01/2025