Artificial Intelligence, Procurement

Momentum Builds in Congress for Using NIST AI RMF in Procurement to Set Rules for Responsible AI


As policymakers grapple with how best to promote the responsible development and use of artificial intelligence (AI), the Risk Management Framework (RMF) published by the National Institute of Standards and Technology (NIST) is emerging as an important tool to manage AI risks.

The NIST RMF reflects broad input from experts about how best to govern, map, measure, and manage AI risks. Since its release in January 2023, the NIST RMF has emerged as a valuable roadmap for companies seeking to responsibly develop and deploy AI products and services.

In a Capitol Hill briefing hosted this week by BSA | The Software Alliance, enterprise software experts discussed how companies can use the NIST RMF to manage AI risks. BSA members are at the forefront of developing responsible AI, and their representatives highlighted the innovative ways in which their AI systems are providing significant societal benefits. They also recognized that organizations must account for the unique opportunities and risks that AI presents, emphasized the importance of risk management programs in AI development, and described the NIST RMF as a foundational guide for identifying and mitigating AI risks.

Now, there is growing momentum among lawmakers in both parties to further promote the adoption of the NIST RMF.  For example, Sen. Jerry Moran (R-KS) recently proposed an amendment to the National Defense Authorization Act that would require implementation of the NIST RMF for federal agencies procuring and using AI in certain high-risk use cases. And, in a letter to the Biden administration last week, Reps. Ted Lieu (D-CA), Zoe Lofgren (D-CA), and Haley Stevens (D-MI) similarly urged the Office of Management and Budget to require federal agencies and vendors to adopt the NIST RMF.

BSA | The Software Alliance supports these efforts. Incorporating the NIST RMF into procurement for high-risk uses would help establish the US government as a market leader on responsible AI by embracing best practices for managing AI risks and holding federal contractors to the same standard. Applying the NIST RMF in the procurement context, as well as in other contexts, can help companies large and small measure and manage AI risks.

The NIST RMF provides the tools to enable a broad swath of organizations across varied industries to adopt responsible practices that apply in a wide array of contexts. Many AI systems will be deployed for low-risk uses, like helping individuals locate recently viewed electronic files or filtering out background noise on a video call. These sorts of low-risk uses can create benefits for users but do not require further accountability measures. But for high-risk uses, like decisions to deny someone housing, employment, credit, education, healthcare, insurance, or access to physical places of public accommodation, the NIST RMF provides an important tool for mitigating AI risks.

Similar to BSA’s 2021 Framework to Build Trust in AI, the NIST RMF outlines practices that organizations can implement to address bias in AI, and the two frameworks align significantly. As BSA’s crosswalk illustrates, both encourage:

    • Consultation with a diverse group of stakeholders;
    • Establishing processes to identify, assess, and mitigate risks;
    • Assigning roles and responsibilities to people throughout an organization;
    • Identifying metrics for evaluation;
    • Evaluating fairness and bias;
    • Maintaining post-deployment feedback mechanisms; and
    • Establishing detailed plans for responding to incidents.

As organizations seek to leverage the RMF, NIST has also provided an accompanying playbook that provides more detailed guidance on how organizations can operationalize the functions in the RMF.  With this guidance, organizations should be well-equipped to govern, map, measure, and manage AI risks.  Whether through procurement requirements or voluntary adoption in commercial contexts, the NIST RMF is a critical tool for ensuring responsible AI innovation.

Author:

Shaundra Watson serves as Senior Director, Policy, in Washington, DC, and is responsible for providing counsel and developing global policy on key issues for the software industry, with an emphasis on artificial intelligence.  In a previous BSA role, Watson also led BSA's engagement on global privacy issues.

Watson has spearheaded BSA’s contributions to key dialogues with US and global policymakers, including through written comments on AI and privacy regulatory proposals; thoughtful contributions on best practices on AI governance; and as an expert speaker in key policy engagements, including the US Federal Trade Commission (FTC) hearings examining privacy approaches, a forum in India with policymakers on development of India’s privacy law, and a briefing on AI for Members of Congress.

Watson rejoined BSA after serving as a corporate in-house senior privacy and information security counsel for a Fortune 500 global entertainment company, where she advised business and technology units on CCPA and GDPR implementation and led development of global privacy compliance strategies.  

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the FTC in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ legal compliance in over 50 enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel for International Consumer Protection in the Office of International Affairs, and an attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various FTC positions, Watson played a key role in notable privacy, security, and consumer protection initiatives, including negotiating and/or implementing flagship programs advancing global data transfers, such as the EU-U.S. Privacy Shield and APEC Cross-Border Privacy Rules; serving on the global expert committee conducting a review of the OECD’s seminal privacy guidelines; and contributing to influential policy reports, by both the FTC and multilateral fora, shaping responsible data practices in the context of emerging technologies. In recognition of her leadership on Internet policy and global domain name issues, Watson received the FTC's prestigious Paul Rand Dixon award.

Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC, where she handled commercial litigation, international trade, and intellectual property matters.  

Watson has been active in advancing dialogues on privacy and AI, formerly serving on IAPP’s Education Advisory Board and the ABA’s big data task force, and as a current member of the Atlantic Council’s Transatlantic Task Force on AI and the National Bar Association’s privacy, security, and technology law section. Watson has also been a law school guest lecturer on international privacy. 

Watson clerked for Justice Peggy Quince at the Supreme Court of Florida and is a graduate of the University of Virginia School of Law in Charlottesville, VA.
