Artificial Intelligence

AI Standards Development: Why It Matters

Artificial intelligence (AI) has already transformed businesses in every industry, provided new ways to solve intractable societal challenges, and created popular new tools to generate text and images. Importantly, it has also ignited interest in an issue that usually doesn’t get much attention: Global standards.

AI standards development has become a popular topic for at least two reasons:

  • Standards can establish benchmarks for responsible AI to help manage AI risks; and
  • Standards can enable effective evaluation of AI systems.

Although policymakers are increasingly interested in the role of AI standards, legislative proposals often miss the mark in two key ways. First, policies sometimes require individual government agencies to create new AI standards, failing to recognize that such standards are already developed through an industry-led, consensus-based process within international standards organizations. Second, some proposals would require external audits of AI systems, overlooking the fact that such audits must be grounded in a robust set of standards. (As we explain here, the AI auditing ecosystem is not currently mature enough to support such audits.)

Who Should Create AI Standards?

One important question raised by recent legislative proposals is: Who should create AI standards? The US government has a long-standing policy of supporting private sector-led, consensus-driven, multi-stakeholder, voluntary standards development processes, and it continues to have an important role to play.

In fact, the US government often contributes to international standards development as one of many stakeholders in international standards organizations. In this capacity, it can provide foundational, pre-standardization research and use strategic global partnerships to promote global standards that are ready for adoption. For example, the US government was involved in creating important standards on two distinct issues that greatly impact our everyday lives: Child car seat restraints and enabling devices to better communicate via Wi-Fi.

This issue is so important that the National Institute of Standards and Technology (NIST) has developed an entire plan for global engagement on AI standards. This plan recognizes the many benefits of the US government’s long-standing approach to standards development, including:

  • Facilitating globally interoperable standards by avoiding country-specific standards that create fragmentation and conflicting requirements;
  • Leveraging industry’s technical expertise to inform key issues and ensuring standards are workable for the people who actually implement them;
  • Better utilizing government resources, instead of developing duplicative standards that reinvent the wheel; and
  • Reducing market friction in global technologies to facilitate more seamless adoption.

NIST has particularly emphasized the importance of interoperable standards. Its global engagement plan recognizes that if different jurisdictions “can standardize on the same building blocks, then even if regulatory environments are not fully aligned, it is at least easier for market participants to move smoothly between markets.” For this reason, “[g]lobal cooperation and coordination on AI standards will be critical for defining a consistent or at least interoperable set of ‘rules of the road.’”

This issue is too important to take a shortcut. Rather than directing an agency to create standards on its own, the United States should maintain its approach of supporting industry-led, multi-stakeholder standards development through international standards organizations. Policymakers who want to spur the creation of AI standards can take other actions to help, as discussed below.

How AI Standards Can Help Evaluate AI Systems

Another concern with AI policy proposals is that they rely too little on standards, instead requiring evaluations and audits that aren’t tied to clear benchmarks.

Standards establish objective benchmarks that enable consistent evaluations and produce reliable, repeatable results. Fortunately, standards organizations are already tackling these issues for AI. The International Organization for Standardization (ISO) has published several AI standards, including ISO/IEC 42001, a standard for implementing AI governance controls. It is also developing others, such as ISO/IEC DIS 42006, a draft standard establishing protocols for conducting AI audits. Stakeholders have also identified other important issues, including data quality, evaluation methodologies, and standards to implement the European Union AI Act requirements.

These standards will have a global impact, shaping both how developers build AI systems and how auditors certify against them. Once standards are established, the rapid pace at which AI technologies evolve will require stakeholders to continue actively updating them so they remain fit for purpose.

What Should Policymakers Do?

Congress has an important role to play and should prioritize these efforts by:

  • Increasing funding for research and development on pre-standardization research, which can serve as a catalyst for AI standardization and innovation;
  • Increasing resources for government agencies that lead the US government’s engagement in global standards development processes, such as NIST;
  • Strengthening the technical capacity of the workforce, which will, among other things, expand the number of people capable of conducting AI evaluations and contributing to AI standards development; and
  • Codifying the AI Safety Institute, which can advance the science of AI safety and evaluation capabilities and strengthen international alliances that US stakeholders can leverage in international standards development efforts.

These measures both underscore the importance of global AI standards and ensure the right people play the right roles in developing those standards — industry provides leadership and technical expertise, and government supports these efforts.

International standards bodies are where this collaboration happens — and we should keep it that way. At the same time, governments should take additional steps to chart the path forward and invest in research, workforce development, and global cooperation. Together, fulfilling these responsibilities will enable all stakeholders to achieve a common goal: Globally interoperable standards that guide responsible AI development and use.

For more information on standards, see BSA | The Software Alliance’s policy brief on AI standards.

Artificial Intelligence

TRANSFORM Recap: Cohere CEO Speaks to Role of Regulation in Spurring AI Adoption

Cohere Cofounder and CEO Aidan Gomez emphasized how a combination of smart regulation and the company’s own development work can spur wider adoption of generative AI for enterprises during a featured discussion at BSA’s TRANSFORM Dialogue in Washington, DC.

Artificial Intelligence, Cybersecurity

Palo Alto Networks on Leveraging AI Discovery and Analysis for Cyber Defense

At Palo Alto Networks, we have doubled down on this posture because we firmly believe that AI-powered cybersecurity is vital to protecting privacy, enhancing national security, and safeguarding our digital way of life.

Artificial Intelligence

BSA AI Solutions: Promoting AI Transparency and Trusted Content

As AI innovation continues, BSA and enterprise software companies are offering a series of solutions for policymakers around the world to build trust, promote AI transparency, and help combat deceptive information.

Artificial Intelligence

BSA Publishes New Solutions for Responsible AI Policy

BSA is now publishing our 2024 “AI Policy Solutions,” a comprehensive set of recommendations for policymakers on new and emerging artificial intelligence issues.