
Unpacking the AI Stack: AI Integrators in Focus

Delivering best-in-class artificial intelligence (AI) services involves a multi-layered supply chain of companies – developers who build original models, integrators who incorporate AI into software applications, and deployers who put systems into use in the real world.

With AI, one thing is constant: change. The technology continues to reach new frontiers. Businesses continue to leverage AI in new and innovative ways. And the global AI regulatory landscape continues to evolve as policymakers increasingly propose obligations to strengthen AI governance – an emerging but critical strategy to put people and processes in place to identify and mitigate AI risks.

As policymakers seek to address AI issues against this dynamic backdrop, one guiding principle can help them develop effective approaches while navigating these changes: obligations should be risk-based and fit the company’s role in the AI ecosystem.

Why Distinctions Matter in the AI Supply Chain

The distinctions among developers, integrators, and deployers in the AI chain are critical. Companies take different steps in each phase of the AI lifecycle, and those steps determine both the information they have access to and their ability to fix issues when they arise.

To be effective, policies should acknowledge that a company’s capabilities to assess and mitigate risks depend on its role. All companies should act responsibly, but legal requirements and best practices should be tailored to that role.

In Focus: Tailoring Obligations for Integrators

Policies should account for relevant roles in a range of settings. Some legislation, including in the EU, regulates not only high-risk uses of AI, but also “general purpose” AI (GPAI) models. GPAI models are not designed for a specific purpose and can be adapted to a wide range of uses; generative AI models are a prominent example. The focus on these models, fueled by the widespread popularity of GPAI-powered applications that can summarize documents or generate images, has highlighted an important component of the AI supply chain – integrators.

Integrators leverage original AI models to power new features in their software applications – for example, an AI assistant in a video communications platform that summarizes notes from a meeting, or a retailer’s AI-powered customer service chatbot that can resolve problems with online orders and answer common questions, such as explaining the company’s refund policy. In some cases, integrators may tweak the model so that it works better with the video platform’s specific features or understands the retailer’s line of business, making its communication with customers more useful.
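To make the distinction concrete, here is a minimal sketch of what integration, as opposed to model development, can look like in practice. It is written in Python, and the ModelClient class, its generate method, and the model name are hypothetical placeholders rather than any particular developer’s API:

# Illustrative sketch only: ModelClient, generate(), and the model name
# are hypothetical stand-ins for whatever API a GPAI developer exposes.

class ModelClient:
    """Hypothetical client for a developer-hosted GPAI model."""

    def __init__(self, model: str):
        self.model = model

    def generate(self, system: str, prompt: str) -> str:
        # In practice, this would call the original developer's model.
        raise NotImplementedError

# Integration: the retailer never retrains or modifies the model itself;
# it wraps the original GPAI model with business-specific instructions.
client = ModelClient(model="general-purpose-model-v1")

REFUND_POLICY = "Refunds are accepted within 30 days of purchase with a receipt."

def answer_customer(question: str) -> str:
    system = (
        "You are a customer service assistant for an online retailer. "
        "Company refund policy: " + REFUND_POLICY
    )
    return client.generate(system=system, prompt=question)

The key point of the sketch is that the integrator’s work happens entirely around the model: the original weights are untouched. Fine-tuning the model on the retailer’s own data would go a step further by changing the model itself, which is the kind of modification the thresholds discussed below are meant to calibrate.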

Important questions have emerged about how new and pending AI rules should apply to AI integrators. For example, the EU AI Office is currently finalizing how its Code of Practice for GPAI models and accompanying guidance will address the EU AI Act requirements that implicate these issues. Here, the question is: When do changes to a GPAI model trigger obligations as the “provider” – the developer – of the model?

Policymakers should account for three key considerations when answering this question. First, a high threshold for triggering those obligations is necessary so that low-risk fine-tuning or routine modifications to GPAI models are not swept into overbroad requirements more suitable for original developers. Second, quantitative thresholds, such as the amount of computing power used or money invested, do not map to actual risks; qualitative, not quantitative, thresholds are therefore more appropriate. Third, companies that modify GPAI models and integrate them directly into software applications, without placing them on the market as stand-alone products, should not be considered the GPAI model provider.

Integrators that do not meet the thresholds for assuming obligations as the developer should still have responsibilities appropriate to the role they play. For example, they should share relevant, non-sensitive information about the changes they make to the model with the next company in the supply chain, so that it can understand the AI’s capabilities and limitations once the system is used in the real world. To aid in this effort, BSA created best practices and a template for this information-sharing last year. In addition, integrators should continue to maintain responsible data governance practices and ensure robust cybersecurity.

To the extent that policymakers introduce additional obligations on companies that make changes to original models, as the EU has done, a key guiding principle is that those obligations should be proportionate to the change the integrator makes and the risk the integrator introduces.

Recognizing Integrators Helps AI Adoption

It’s important to get this right. When policymakers recognize the appropriate role of different AI actors, including AI integrators, everyone benefits. Companies assume obligations they are capable of fulfilling. Integrators – many of which are small and medium-sized companies – have the flexibility to continue to provide new AI-enabled features, spurring more AI adoption. Businesses, governments, and consumers gain access to AI solutions that transform their product delivery, citizen services, and day-to-day lives.

As we ride the waves of change in the evolving AI technological and regulatory landscape, one simple but enduring principle can help policymakers develop AI governance approaches that remain effective amid constant change: ensure obligations on AI actors fit the role.

Author:

Shaundra Watson serves as BSA’s Senior Director, Policy, in Washington, DC, and is responsible for providing counsel and developing global policy on key issues for the software industry, with an emphasis on artificial intelligence. In a previous BSA role, Watson also led BSA’s engagement on global privacy issues.

Watson has spearheaded BSA’s contributions to key dialogues with US and global policymakers, including through written comments on AI and privacy regulatory proposals; thoughtful contributions on best practices for AI governance; and as an expert speaker in key policy engagements, including the US Federal Trade Commission (FTC) hearings examining privacy approaches, a forum in India with policymakers on the development of India’s privacy law, and a briefing on AI for Members of Congress.

Watson rejoined BSA after serving as senior in-house privacy and information security counsel for a Fortune 500 global entertainment company, where she advised business and technology units on CCPA and GDPR implementation and led the development of global privacy compliance strategies.

Prior to joining BSA, Watson served as an Attorney-Advisor in the Office of Chairwoman Edith Ramirez at the FTC in Washington, DC, where she advised Chairwoman Ramirez on privacy, data security, and international issues and evaluated companies’ legal compliance in over 50 enforcement actions. During her FTC tenure, which spanned more than a decade, Watson also served as an Attorney-Advisor in the Office of Commissioner Julie Brill, Counsel for International Consumer Protection in the Office of International Affairs, and an attorney in the Divisions of Privacy and Identity Protection and Marketing Practices.

In her various FTC positions, Watson played a key role on notable privacy, security, and consumer protection initiatives, including negotiating and/or implementing flagship programs advancing global data transfers, such as the EU-U.S. Privacy Shield and APEC Cross-Border Privacy Rules; serving on the global expert committee conducting a review of the OECD’s seminal privacy guidelines; and contributing to influential policy reports – by both the FTC and multilateral fora – shaping responsible data practices in the context of emerging technologies. In recognition of her leadership on Internet policy and global domain name issues, Watson received the FTC’s prestigious Paul Rand Dixon award.

Prior to joining the FTC, Watson was an Associate at Hogan & Hartson, LLP (now Hogan Lovells) in Washington, DC, where she handled commercial litigation, international trade, and intellectual property matters.  

Watson has been active in advancing dialogues on privacy and AI, formerly serving on IAPP’s Education Advisory Board and the ABA’s big data task force, and currently serving as a member of the Atlantic Council’s Transatlantic Task Force on AI and the National Bar Association’s privacy, security, and technology law section. Watson has also been a law school guest lecturer on international privacy.

Watson clerked for Justice Peggy Quince at the Supreme Court of Florida and is a graduate of the University of Virginia School of Law in Charlottesville, VA.
