
Managing AI Risks: How NIST’s AI Risk Management Framework Aligns With BSA’s AI Risk Management Framework

The US government created an important new tool for companies to responsibly manage artificial intelligence technology when the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (RMF) earlier this year.

A new “crosswalk” analysis showcases the significant common ground between BSA’s 2021 Framework to Build Trust in AI and the new NIST RMF.

The RMF is a major step toward promoting the responsible development and deployment of AI products and services and offers the most significant guidance to date by the US government regarding management of AI technologies.

Risk management frameworks, including BSA’s Framework and the NIST RMF, are important accountability tools: they help ensure that organizations have the policies, processes, and personnel in place to identify and mitigate risks, and they can help guard against the unintended consequences of AI.

The NIST RMF creates a lifecycle approach to addressing AI risks. It also identifies the characteristics of trustworthy AI and acknowledges the importance of impact assessments in identifying, understanding, and mitigating risks. Similarly, the BSA Framework addresses issues ranging from acquiring data for use in an AI system to preparing that system for deployment and use, and it provides guidance on how to conduct impact assessments.

BSA spent more than a year developing our Framework, which is informed by a vast body of research and technical expertise. In light of BSA’s substantial work on these issues, we actively supported NIST’s development of the RMF.

The “crosswalk” analysis prepared by BSA shows how NIST’s RMF aligns with the BSA Framework. Key takeaways from the crosswalk include that both frameworks recognize the importance of:

  • Consultation with a diverse group of stakeholders;
  • Establishing processes to identify, assess, and mitigate risks;
  • Assigning roles and responsibilities to individuals throughout an organization;
  • Identifying metrics for evaluation;
  • Evaluating fairness and bias;
  • Maintaining post-deployment feedback mechanisms; and
  • Establishing detailed plans for responding to incidents.

BSA’s Framework goes into greater detail on identifying and mitigating bias and offers additional recommendations on issues such as data collection and examination.

Read the full “crosswalk” analysis here and contact BSA Senior Director, Policy, Shaundra Watson for more details.

Author:

Shaundra Watson serves as Senior Director, Policy, in Washington, DC, and is responsible for providing counsel and developing global policy on key issues for the software industry, with an emphasis on artificial intelligence. In a previous BSA role, Watson also led BSA's engagement on global privacy issues.

Watson has spearheaded BSA’s contributions to key dialogues with US and global policymakers, including through written comments on AI and privacy regulatory proposals; thoughtful contributions on best practices on AI governance; and as an expert speaker in key policy engagements, including the US Federal Trade Commission (FTC) hearings examining privacy approaches, a forum in India with policymakers on development of India’s privacy law, and a briefing on AI for Members of Congress.

Watson rejoined BSA after serving as a corporate in-house senior privacy and information security counsel for a Fortune 500 global entertainment company, where she advised business and technology units on CCPA and GDPR implementation and led development of global privacy compliance strategies.  

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the FTC in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ legal compliance in over 50 enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel for International Consumer Protection in the Office of International Affairs, and an attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various FTC positions, Watson played a key role on notable privacy, security, and consumer protection initiatives, including negotiating and/or implementing flagship programs advancing global data transfers, such as the  EU-U.S. Privacy Shield and APEC Cross-Border Privacy Rules, serving on the global expert committee conducting a review of the OECD’s seminal privacy guidelines, and contributing to influential policy reports -- by both the FTC and multilateral fora -- shaping responsible data practices in the context of emerging technologies. In recognition of her leadership on Internet policy and global domain name issues, Watson received the FTC's prestigious Paul Rand Dixon award. 

Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC, where she handled commercial litigation, international trade, and intellectual property matters.  

Watson has been active in advancing dialogues on privacy and AI, formerly serving on IAPP’s Education Advisory Board and the ABA’s big data task force, and as a current member of the Atlantic Council’s Transatlantic Task Force on AI and the National Bar Association’s privacy, security, and technology law section. Watson has also been a law school guest lecturer on international privacy. 

Watson clerked for Justice Peggy Quince at the Supreme Court of Florida and is a graduate of the University of Virginia School of Law in Charlottesville, VA.
