
Momentum Builds in Congress for Using NIST AI RMF in Procurement to Set Rules for Responsible AI

As policymakers grapple with how best to promote the responsible development and use of artificial intelligence (AI), the Risk Management Framework (RMF) published by the National Institute of Standards and Technology (NIST) is emerging as an important tool to manage AI risks.

The NIST RMF reflects broad input from experts about how best to govern, map, measure, and manage AI risks. Since its release in January, the NIST RMF has emerged as a valuable roadmap for companies to responsibly develop and deploy AI products and services.

In a Capitol Hill briefing hosted this week by BSA | The Software Alliance, enterprise software experts discussed how companies can use the NIST RMF to manage AI risks. BSA members are at the forefront of developing responsible AI, and the company representatives highlighted the innovative ways in which their AI systems are providing significant societal benefits. They also recognized that organizations must account for the unique opportunities and risks that AI presents, emphasized the importance of risk management programs in AI development, and described the NIST RMF as a foundational guide for identifying and mitigating AI risks.

Now, there is growing momentum among lawmakers in both parties to further promote adoption of the NIST RMF. For example, Sen. Jerry Moran (R-KS) recently proposed an amendment to the National Defense Authorization Act that would require federal agencies to implement the NIST RMF when procuring and using AI in certain high-risk use cases. And in a letter to the Biden administration last week, Reps. Ted Lieu (D-CA), Zoe Lofgren (D-CA), and Haley Stevens (D-MI) similarly urged the Office of Management and Budget to require federal agencies and vendors to adopt the NIST RMF.

BSA | The Software Alliance supports these efforts. Incorporating the NIST RMF into procurement for high-risk uses would help establish the US government as a market leader on responsible AI by embracing best practices for managing AI risks and holding federal contractors to the same standard. Applying the NIST RMF in the procurement context, as well as in other contexts, can help companies large and small measure and manage AI risks.

The NIST RMF provides the tools to enable a broad swath of organizations across varied industries to adopt responsible practices that apply in a wide array of contexts. Many AI systems will be deployed for low-risk uses, like helping individuals locate recently viewed electronic files or filtering out background noise on a video call. These sorts of low-risk uses can create benefits for users but do not require further accountability measures. But for high-risk uses, like decisions to deny someone housing, employment, credit, education, healthcare, insurance, or access to physical places of public accommodation, the NIST RMF provides an important tool for mitigating AI risks.

Like BSA’s 2021 Framework to Build Trust in AI, the NIST RMF outlines practices that organizations can implement to address bias in AI, and the two frameworks align significantly. As BSA’s crosswalk illustrates, both encourage:

    • Consultation with a diverse group of stakeholders;
    • Establishing processes to identify, assess, and mitigate risks;
    • Assigning roles and responsibilities to individuals throughout an organization;
    • Identifying metrics for evaluation;
    • Evaluating fairness and bias;
    • Maintaining post-deployment feedback mechanisms; and
    • Establishing detailed plans for responding to incidents.

As organizations seek to leverage the RMF, NIST has also published an accompanying playbook that offers more detailed guidance on how organizations can operationalize the RMF’s functions. With this guidance, organizations should be well equipped to govern, map, measure, and manage AI risks. Whether through procurement requirements or voluntary adoption in commercial contexts, the NIST RMF is a critical tool for ensuring responsible AI innovation.

Author:

Shaundra Watson serves as Senior Director, Policy, in Washington, DC and is responsible for providing counsel and developing policy on key issues for the software industry, with an emphasis on privacy and artificial intelligence.

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the US Federal Trade Commission (FTC) in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ compliance with privacy and data security laws in numerous enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel in the Office of International Affairs, and attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various positions, Watson played a key role on notable privacy and security initiatives, including the negotiation of the EU-U.S. Privacy Shield; implementation of the APEC Cross-Border Privacy Rules; and policy development on the Internet of Things, big data, and expansion of the global domain name system. In recognition of her leadership on Internet policy issues, Watson received the agency’s Paul Rand Dixon award. Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC and clerked for Justice Peggy Quince at the Supreme Court of Florida.

Watson holds a privacy certification from the International Association of Privacy Professionals and serves on the organization’s Education Advisory Board. Watson is a graduate of the University of Virginia School of Law in Charlottesville, Virginia.
