
Enhancing AI Accountability: Effective Policies for Assessing Responsible AI


Artificial intelligence (AI) development is accelerating at a record pace, and its benefits continue to grow. From improving cybersecurity to optimizing manufacturing to powering healthcare breakthroughs, the impact of AI is now ubiquitous. As companies across industry sectors seek to leverage the benefits of AI, they are also exploring how best to manage its risks.

To help identify and manage AI-related risks, companies are building internal AI governance or risk management programs that leverage important new resources. These include the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), the EU AI Act, and the International Organization for Standardization’s (ISO) new AI management system standard, ISO/IEC 42001.

These frameworks help organizations determine their risk tolerance and assess the context in which they use AI, including “sociotechnical” considerations, which account for how the system may interact with humans or affect broader societal issues. Frameworks can also help companies ensure they have the people and processes in place to identify and manage risks, including by involving senior management in key strategic decisions and establishing mechanisms to evaluate AI systems.

One key practice in implementing an AI governance program is conducting impact assessments for high-risk AI systems. BSA | The Software Alliance highlighted the importance of impact assessments three years ago in our report on AI risk management, “Confronting Bias: BSA’s Framework to Build Trust in AI.” The NIST AI RMF also underscores the importance of impact assessments, which are a proven accountability tool already used in other fields, including privacy and environmental protection. Both developers of AI systems and deployers of high-risk AI systems can conduct impact assessments to help identify and mitigate potential risks. Naturally, those assessments will not be the same, given the distinct roles of developers and deployers and the different steps they can take to mitigate risks.

Impact assessments have several benefits:

  • They guide product engineers in considering a broad range of issues associated with AI development, including representativeness of training data and data labeling methodologies, relevant contextual issues, and rationale for approaches used to validate and test AI models;
  • They provide documentation for other internal teams to assess the system’s capabilities and performance, including whether the system poses risks that warrant escalation or further mitigation, and whether practices are consistent with internal risk management procedures (a minimal sketch of such a record appears after this list); and
  • They allow companies to implement accountability practices within the organization, which protects trade secrets and other proprietary or sensitive information from being shared with third parties and avoids the risk of unauthorized disclosure, particularly by third parties that may not have adequate security. They also make implementing accountability measures more cost-effective, a key priority for smaller companies with more limited personnel and resources.
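To make the documentation benefit concrete, here is a minimal sketch of how a team might capture an impact assessment as a structured record that other internal teams can review. It is purely illustrative: the `ImpactAssessment` class and its field names are hypothetical and are not drawn from the NIST AI RMF, ISO/IEC 42001, or any other framework discussed in this post.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    """Illustrative record of an internal AI impact assessment.

    Field names are hypothetical examples, not terms defined by the
    NIST AI RMF, ISO/IEC 42001, or any other framework.
    """
    system_name: str
    assessment_date: date
    intended_use: str                      # context in which the system will operate
    training_data_representativeness: str  # how training data reflects affected populations
    data_labeling_methodology: str         # how labels were produced and reviewed
    validation_approach: str               # rationale for testing and validation methods
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    requires_escalation: bool = False      # flag for review by senior management

    def summary(self) -> str:
        """One-line summary other internal teams can scan during review."""
        status = "ESCALATE" if self.requires_escalation else "OK"
        return (f"[{status}] {self.system_name}: "
                f"{len(self.identified_risks)} risks, {len(self.mitigations)} mitigations")


# Example usage with placeholder values.
assessment = ImpactAssessment(
    system_name="resume-screening-model",
    assessment_date=date(2024, 9, 1),
    intended_use="Rank job applications for recruiter review",
    training_data_representativeness="Historical applications, 2018-2023, all regions",
    data_labeling_methodology="Dual-annotator labels with adjudication",
    validation_approach="Holdout test set plus subgroup performance checks",
    identified_risks=["Potential disparate impact across demographic groups"],
    mitigations=["Subgroup error-rate monitoring", "Human review of all rejections"],
    requires_escalation=True,
)
print(assessment.summary())
```

A record along these lines can live alongside internal risk management procedures and be escalated when the flag is set, without any proprietary details leaving the organization.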

Despite the substantial benefits of internal impact assessments, there is a growing chorus of voices in AI policy discussions advocating for mandatory third-party audits. Supporters argue that external safeguards are needed to promote meaningful transparency and accountability of companies developing and using AI. Policymakers have acknowledged these concerns and introduced legislation with third-party audit requirements, including in Congress, California, and Canada.

Third-party audits are not new. They are common in other areas, such as finance and cybersecurity, where strong standards and robust oversight form the building blocks of the audit ecosystem. In the financial sector, US securities laws require publicly traded companies to obtain independent audits of their financial statements, which evaluate whether companies comply with generally accepted accounting standards. Congress also put in place a robust framework to make sure these audits worked, creating the Public Company Accounting Oversight Board (PCAOB), a nonprofit organization that oversees the financial auditing of public companies and reports to the US Securities and Exchange Commission (SEC). The PCAOB, in turn, makes sure financial auditors — who are required to register — adhere to a variety of requirements. It develops and enforces professional practice standards, oversees the quality control, ethical, and independence standards auditors must follow, and conducts inspections of registered public accounting firms. The American Institute of CPAs (AICPA) also mandates professional standards for accountants, including requiring them to keep client information confidential.

In the cybersecurity context, independent audits are generally not required. However, well-established standards provide common rules of the road, such as AICPA’s auditing standards and SOC 2 reports, as well as ISO 27001, an information security management standard. There are several incentives for companies to use these tools: Customers may request SOC 2 Type 2 audit reports and ISO 27001 certifications for assurance that their data is safe, and some states provide safe harbors for complying with these standards.

There is also a process in place to make sure auditors can do the job. AICPA accredits certified public accountants (CPAs) and accounting firms licensed to conduct SOC 2 Type 2 audits through a rigorous certification process, in which CPAs must specialize in information security audits. Additionally, several states impose CPA licensing requirements, and certain organizations, such as the American National Standards Institute (ANSI), accredit certification bodies that audit and certify compliance with ISO 27001.

In some cases, companies may decide that investing the time, energy, and resources necessary to undergo a voluntary third-party AI audit that shows compliance with international standards will help meet important business goals or provide a commercial advantage. In the AI context, however, there are several reasons why mandated third-party audits may not be workable now:

  • Immature Auditing Ecosystem. The AI audit ecosystem is immature in several key respects. It lacks: (1) comprehensive standards for AI systems and how AI audits should be conducted; (2) a robust framework for governing the professional conduct of AI auditors; and (3) sufficient resources for conducting AI audits.

1. Standards. Comprehensive standards are necessary to establish objective benchmarks and consistent evaluations that can produce reliable, repeatable results. As NIST acknowledged in its plan for global engagement on AI standards, the US government has a long-standing policy of participating in global standards development in international organizations, such as ISO, through a private sector-led, multi-stakeholder process, which both leverages industry’s technical expertise and facilitates global interoperability. ISO has published a handful of AI standards and is developing more, including ISO/IEC DIS 42006, a draft standard establishing protocols for conducting AI audits. However, stakeholders have identified a range of other important gaps to address, including standards for data quality and evaluation methodologies. The rapid pace at which AI technologies are evolving will also likely create ongoing challenges in ensuring standards remain fit for purpose.

Despite issuing a report recommending third-party requirements for high-risk uses of AI, the National Telecommunications and Information Administration (NTIA) candidly acknowledged that the “current dearth of consensus technical standards for use in AI system evaluations is a barrier to assurance practices.” It continued that this, in turn, causes “uncertainty for companies seeking compliance, diminished usefulness of audits, and reduced assurance for customers, the government, and the public.” As a result, NTIA concluded that “[a]s a practical matter, internal evaluations are more mature and robust currently than independent evaluations.”

2. Oversight of AI Auditors. Although ANSI accredits companies to provide certifications to ISO AI standards, policymakers should also consider how additional tools could bolster oversight of AI auditors. The financial industry, as noted above, has multiple layers of protection. For example, PCAOB is an independent organization, established by Congress, that applies clear auditing standards and strong ethical rules. But it doesn’t stop there; PCAOB’s inspections enable it to actively monitor compliance and investigate and discipline auditors who don’t follow the rules. PCAOB also has an internal office that assesses the integrity and effectiveness of its own programs and can steer things in a different direction if problems with the oversight of auditors arise. The SEC also has an important role in providing an additional check on both audits and the people who conduct them.

The goal of financial audits is to protect investors, and the highly regulated context in which they are used is distinct from the AI context. However, to the extent policymakers propose third-party audit requirements, they should look to the assurances provided by existing audit frameworks to identify components necessary to build an effective and workable system.

3. Sufficient Resources. AI audits also face resource challenges. Until the auditing industry grows, only a small number of auditors are likely to be capable of understanding complex AI systems, which could lengthen the time needed to conduct an audit. And, in some cases, a company’s own employees may already have extensive knowledge of the product under review, technical expertise, and access to interdisciplinary perspectives that would enable a comprehensive internal assessment. Along the same lines, there may be a lack of other resources for conducting AI audits, such as publicly available testing data and access to the computing resources necessary for rigorous evaluations.

  • Increased Risk of Disclosure of Confidential Information. External audits require companies to share confidential and sensitive information with third parties. This creates risks not only for companies’ intellectual property and other sensitive proprietary information, but also for privacy and cybersecurity. For example, if the developer of an AI system must share personal data with a third-party auditor, it could be forced to transfer large amounts of information about consumers to the auditor. Doing so could undermine the privacy protections the company applies to that information, particularly where the auditor has not implemented robust privacy safeguards. Similarly, requiring companies to transfer data increases cybersecurity risks by creating new avenues for bad actors to obtain that information.
  • Decreased Competition. Mandated third-party audits can also harm small- and medium-sized enterprises, which often lack the resources to undertake pre-audit preparation and hire qualified third parties to conduct audits. In contrast, more companies can likely use current staff to conduct impact assessments without prohibitive costs or onerous requirements, avoiding unintended competitive consequences. As other stakeholders have recognized, impact assessments are also easier for companies to adopt because they already have experience with them in other fields, such as privacy, and can adapt the assessments as technologies evolve. This agile tool can help level the playing field by enabling all companies, large and small, to increase the trustworthiness of their AI products, a key driver of adoption in the marketplace.
  • Other Options Are Available. Finally, other effective methods for regulatory oversight exist. For example, regulators can determine whether companies are identifying and mitigating risks appropriately by requesting impact assessments in the course of an investigation. This broad investigative authority already enables the government to police the marketplace effectively in a range of other contexts, and the compliance incentives it creates apply equally here. In addition, companies increasingly provide public information about the capabilities and key features of AI systems through mechanisms like transparency reports, model cards, datasheets, and nutrition labels, which supplement regulatory efforts to enhance transparency and accountability (a minimal example appears below).
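As an illustration of the transparency mechanisms noted in the last bullet, the sketch below renders a model-card-style summary from a plain Python dictionary. The keys and values are hypothetical and do not follow any particular model card, datasheet, or nutrition label specification; they simply show the kind of public-facing information a company might choose to publish alongside an AI system.

```python
import json

# Hypothetical model-card-style summary; the keys are illustrative and do not
# follow any particular model card or datasheet specification.
model_card = {
    "model_name": "example-credit-risk-model",
    "version": "1.2.0",
    "intended_use": "Estimate default risk for consumer loan applications",
    "out_of_scope_uses": ["Employment decisions", "Insurance pricing"],
    "training_data": "Anonymized loan outcomes, 2015-2023",
    "evaluation": {
        "metric": "AUC",
        "overall": 0.87,
        "notes": "Performance also reviewed per demographic subgroup internally",
    },
    "known_limitations": ["Lower accuracy for thin-file applicants"],
    "contact": "ai-governance@example.com",
}

# Publish as JSON so it can accompany transparency reports or product documentation.
print(json.dumps(model_card, indent=2))
```

Because a summary like this contains only information the company chooses to publish, it can supplement regulatory oversight without exposing confidential details.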

Stakeholders on different sides of this issue share the same goal: Enhancing accountability in AI deployment and use. As policymakers work toward this goal, they should consider these key questions:

  • Who is most capable of assessing the operation of complex AI systems?
  • Does the infrastructure of the AI ecosystem currently support effective implementation of a proposed audit requirement?
  • What is the best mechanism for protecting sensitive commercial and personal information when assessing AI practices?
  • What impact will a particular assessment approach have on a competitive marketplace?

The answers to these important questions lead to one unavoidable conclusion: Policymakers should not mandate third-party AI audits before there are consensus-based, international standards against which to conduct an audit. Even once standards are established, policymakers should consider how audit requirements could affect other aspects of the AI marketplace, including the impact on smaller companies and the availability of key resources, such as data and computing capacity.

Author:

Shaundra Watson serves as Senior Director, Policy, at BSA | The Software Alliance in Washington, DC, and is responsible for providing counsel and developing policy on key issues for the software industry, with an emphasis on privacy and artificial intelligence.

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the US Federal Trade Commission (FTC) in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ compliance with privacy and data security laws in numerous enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel in the Office of International Affairs, and attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various positions, Watson played a key role on notable privacy and security initiatives, including the negotiation of the EU-U.S. Privacy Shield; implementation of the APEC Cross-Border Privacy Rules; and policy development on the Internet of Things, big data, and expansion of the global domain name system. In recognition of her leadership on Internet policy issues, Watson received the agency’s Paul Rand Dixon award. Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC and clerked for Justice Peggy Quince at the Supreme Court of Florida.

Watson holds a privacy certification from the International Association of Privacy Professionals and serves on the organization’s Education Advisory Board. Watson is a graduate of the University of Virginia School of Law in Charlottesville, Virginia.
