AI Standards Development: Why It Matters

Artificial intelligence (AI) has already transformed businesses in every industry, provided new ways to solve intractable societal challenges, and created popular new tools to generate text and images. Importantly, it has also ignited interest in an issue that usually doesn’t get much attention: Global standards.

AI standards development has become a popular topic for at least two reasons:

  • Standards can establish benchmarks for responsible AI to help manage AI risks; and
  • Standards can enable effective evaluation of AI systems.

Although policymakers are increasingly interested in the role of AI standards, legislative proposals often miss the mark in two key ways. First, some proposals would require individual government agencies to create new AI standards, overlooking that such standards are already developed through an industry-led, consensus-based process within international standards organizations. Second, some proposals would require external audits of AI systems without recognizing that such audits must be based on a robust set of standards. (As we explain here, the AI auditing ecosystem is not yet mature enough to support such audits.)

Who Should Create AI Standards?

One important question raised by recent legislative proposals is: Who should create AI standards? The US government has a long-standing policy of supporting private sector-led, consensus-driven, multi-stakeholder, voluntary standards development processes, and it continues to have an important role to play.

In fact, the US government often contributes to international standards development as one of many stakeholders in international standards organizations. In this capacity, it can provide foundational, pre-standardization research and use strategic global partnerships to promote global standards that are ready for adoption. For example, the US government was involved in creating important standards on two distinct issues that greatly impact our everyday lives: Child car seat restraints and Wi-Fi communication between devices.

This issue is so important that the National Institute of Standards and Technology (NIST) has developed an entire plan for global engagement on AI standards. This plan recognizes the many benefits of the US government’s long-standing approach to standards development, including:

  • Facilitating globally interoperable standards by avoiding country-specific standards that create fragmentation and conflicting requirements;
  • Leveraging industry’s technical expertise to inform key issues and ensuring standards are workable for the people who actually implement them;
  • Better utilizing government resources, instead of developing duplicative standards that reinvent the wheel; and
  • Reducing market friction in global technologies to facilitate more seamless adoption.

NIST has particularly emphasized the importance of interoperable standards. Its global engagement plan recognizes that if different jurisdictions “can standardize on the same building blocks, then even if regulatory environments are not fully aligned, it is at least easier for market participants to move smoothly between markets.” For this reason, “[g]lobal cooperation and coordination on AI standards will be critical for defining a consistent or at least interoperable set of ‘rules of the road.’”

This issue is too important for shortcuts. Rather than directing an agency to create standards on its own, the United States should maintain its approach of supporting industry-led, multi-stakeholder standards development through international standards organizations. Policymakers who want to spur the creation of AI standards can take other actions to help, as discussed below.

How AI Standards Can Help Evaluate AI Systems

Another concern with AI policy proposals is that they don’t rely enough on standards, instead requiring evaluations and audits that aren’t tied to clear benchmarks.

Standards establish objective benchmarks that enable consistent evaluations and produce reliable, repeatable results. Fortunately, standards organizations are already working on these issues for AI. The International Organization for Standardization (ISO) has published several AI standards, including ISO/IEC 42001, a standard for implementing AI governance controls. It is also developing others, such as ISO/IEC DIS 42006, a draft standard establishing protocols for conducting AI audits. Stakeholders have also identified other important areas of work, including data quality, evaluation methodologies, and standards to implement the European Union AI Act’s requirements.

These standards will have a global impact on both how developers build AI systems and how auditors certify against global benchmarks. Once standards are established, the rapid pace at which AI technologies evolve will require stakeholders to actively update them so they remain fit for purpose.

What Should Policymakers Do?

Congress has an important role to play and should prioritize these efforts by:

  • Increasing funding for pre-standardization research and development, which can serve as a catalyst for AI standardization and innovation;
  • Increasing resources for government agencies that lead the US government’s engagement in global standards development processes, such as NIST;
  • Strengthening the technical capacity of the workforce, which will, among other things, expand the number of people capable of conducting AI evaluations and contributing to AI standards development; and
  • Codifying the AI Safety Institute, which can advance the science of AI safety and evaluation capabilities and strengthen international alliances that US stakeholders can leverage in international standards development efforts.

These measures both underscore the importance of global AI standards and ensure the right people play the right roles in developing those standards — industry provides leadership and technical expertise, and government supports these efforts.

International standards bodies are where this collaboration happens — and we should keep it that way. At the same time, governments should take additional steps to chart the path forward and invest in research, workforce development, and global cooperation. Together, fulfilling these responsibilities will enable all stakeholders to achieve a common goal: Globally interoperable standards that guide responsible AI development and use.

For more information on standards, see BSA | The Software Alliance’s policy brief on AI standards.

Author:

Shaundra Watson serves as Senior Director, Policy, in Washington, DC and is responsible for providing counsel and developing policy on key issues for the software industry, with an emphasis on privacy and artificial intelligence.

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the US Federal Trade Commission (FTC) in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ compliance with privacy and data security laws in numerous enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel in the Office of International Affairs, and attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various positions, Watson played a key role on notable privacy and security initiatives, including the negotiation of the EU-U.S. Privacy Shield; implementation of the APEC Cross-Border Privacy Rules; and policy development on the Internet of Things, big data, and expansion of the global domain name system. In recognition of her leadership on Internet policy issues, Watson received the agency’s Paul Rand Dixon award. Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC and clerked for Justice Peggy Quince at the Supreme Court of Florida.

Watson holds a privacy certification from the International Association of Privacy Professionals and serves on the organization’s Education Advisory Board. Watson is a graduate of the University of Virginia School of Law in Charlottesville, Virginia.
