Artificial Intelligence

Unpacking the AI Stack: AI Integrators in Focus

Delivering best-in-class artificial intelligence (AI) services involves a multi-layered supply chain of companies – developers who build original models, integrators that incorporate AI into software applications, and deployers who put systems into use in the real world.

With AI, one thing is constant: change. The technology continues to reach new frontiers. Businesses continue to leverage AI in new and innovative ways. And the global AI regulatory landscape continues to evolve as policymakers increasingly propose obligations to strengthen AI governance – an emerging but critical strategy to put people and processes in place to identify and mitigate AI risks.

As policymakers seek to address AI issues against this dynamic backdrop, one guiding principle can help them develop effective approaches while navigating these changes: obligations should be risk-based and fit the company’s role in the AI ecosystem.

Why Distinctions Matter in the AI Supply Chain

The distinctions among developers, integrators, and deployers in the AI chain are critical. Companies take different steps in each phase of the AI lifecycle, and those differences affect both the information they have access to and their ability to fix issues when they arise.

To be effective, policies should acknowledge the different capabilities companies have, based on their role, to assess and mitigate different risks. All companies should act responsibly, but legal requirements and best practices should be tailored to the company’s role.

In Focus: Tailoring Obligations for Integrators

Policies should account for relevant roles in a range of settings. Some legislation, including in the EU, regulates not only high-risk uses of AI, but also “general purpose” AI (GPAI) models. GPAI models, such as those that power generative AI tools, are not designed for a specific purpose and are highly adaptable to a wide range of uses. The focus on these models, fueled by the widespread popularity of applications powered by GPAI models that can summarize documents or generate images, has highlighted an important component of the AI supply chain – integrators.

Integrators leverage original AI models to power new features in their software applications, such as an AI assistant in a video communications platform that summarizes notes from a meeting, or a retailer’s AI-powered customer service chatbot that can resolve problems with online orders or answer common questions, such as explaining the company’s refund policy. In some cases, the integrator may adapt or fine-tune the model so that it works better with the specific features of the video platform, or understands the retailer’s line of business, making its communication with customers more useful.
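
To make the integration step concrete, here is a minimal, purely illustrative sketch of how an integrator might wrap a developer’s general-purpose model inside one application feature, such as meeting-note summarization. The client object and its generate method are assumptions made for illustration, not any particular vendor’s API.

# Purely illustrative sketch of an integrator wrapping a general-purpose model
# to power a meeting-summary feature; the client interface is hypothetical.
from dataclasses import dataclass


class StubGPAIClient:
    """Stand-in for a developer-provided model API (hypothetical)."""

    def generate(self, prompt: str) -> str:
        # A real client would call the model developer's service here.
        return "(model-generated summary would appear here)"


@dataclass
class SummaryRequest:
    transcript: str       # raw meeting transcript captured by the video platform
    max_words: int = 150  # keep summaries short enough for an in-app notes panel


def summarize_meeting(client: StubGPAIClient, request: SummaryRequest) -> str:
    """Shape the prompt so the general-purpose model's output fits one feature.

    The integrator is not retraining the model here; it is adapting how the
    model is used inside its own application.
    """
    prompt = (
        "Summarize the following meeting transcript as key decisions and "
        f"action items, in at most {request.max_words} words:\n\n"
        f"{request.transcript}"
    )
    return client.generate(prompt)


print(summarize_meeting(StubGPAIClient(), SummaryRequest("Alice: Let's ship on Friday.")))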

Important questions have emerged about how new and pending AI rules should apply to AI integrators. For example, the EU AI Office is currently finalizing how its Code of Practice for GPAI models and accompanying guidance will address the EU AI Act requirements that implicate these issues. Here, the question is: When do changes to a GPAI model trigger obligations as the “provider” – the developer – of the model?

Policymakers should account for three key considerations when answering this question. First, a high threshold for triggering those obligations is necessary so that low-risk fine-tuning or routine modifications to GPAI models are not swept into overbroad requirements better suited to original developers. Second, quantitative thresholds, such as the amount of computing power used or money invested, do not map to actual risks; qualitative thresholds are therefore more appropriate. Third, companies that modify GPAI models but integrate them directly into their own software applications, rather than placing them on the market as stand-alone products, should not be considered the GPAI model provider.

Integrators that do not meet the thresholds for assuming obligations as the developer should still have appropriate responsibilities for the role they play. For example, they should share relevant, non-sensitive information about the changes they make to the model with the next company in the supply chain so that it can understand the AI’s capabilities and limitations once the system is used in the real world. To aid in this effort, BSA created best practices and a template for this information-sharing last year. In addition, integrators should continue to maintain responsible data governance practices and ensure robust cybersecurity.
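
As a purely hypothetical illustration of the kind of non-sensitive detail that could flow downstream (the field names below are assumptions, not BSA’s published template), such a record might look like this:

# Hypothetical illustration only; field names are assumptions, not BSA's template.
from dataclasses import dataclass, field


@dataclass
class ModelChangeDisclosure:
    base_model: str               # original GPAI model the integrator built on
    modification: str             # what the integrator changed
    intended_use: str             # the application feature the model powers
    known_limitations: list[str] = field(default_factory=list)
    evaluation_summary: str = ""  # non-sensitive notes on testing performed


disclosure = ModelChangeDisclosure(
    base_model="example-gpai-model",
    modification="fine-tuned on anonymized retail support conversations",
    intended_use="customer service chatbot for order and refund questions",
    known_limitations=["not intended for legal or medical advice"],
)
print(disclosure)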

Where policymakers introduce additional obligations on companies that modify original models, as in the EU, a key guiding principle is that those obligations should be proportionate to the change the integrator makes and the risk the integrator introduces.

Recognizing Integrators Helps AI Adoption

It’s important to get this right. When policymakers recognize the appropriate role of different AI actors, including AI integrators, everyone benefits. Companies assume obligations they are capable of fulfilling. Integrators – many of which are small and medium-sized companies – have the flexibility to continue to provide new AI-enabled features, spurring more AI adoption. Businesses, governments, and consumers gain access to AI solutions that transform their product delivery, citizen services, and day-to-day lives.

As we ride the waves of change in the evolving AI technological and regulatory landscape, one simple but enduring principle can help policymakers develop AI governance approaches that remain effective amid constant change: ensure obligations on AI actors fit the role.

Cybersecurity, Tech-à-Tech

Tech-à-Tech Featuring Akamai’s Christian Borggreen

In this episode of Tech-à-Tech, Christian Borggreen, Head of Public Policy for EMEA and LATAM at Akamai Technologies, lifts the lid on the internet’s hidden scaffolding. Read More >>

Artificial Intelligence

How BSA Members Are Rewriting the Playbook on Sports With AI

The world of sports is changing at a groundbreaking pace thanks to innovative insights from artificial intelligence. Experts from Business Software Alliance member companies hosted congressional staff this week on Capitol Hill to discuss how their AI tools are shaping athlete injury prevention, stadium design, and fan engagement. Read More >>

Procurement

FedRAMP 20x: Addressing Industry Concerns While Increasing Speed and Industry Outreach

In the ever-evolving landscape of government cloud security, speed and simplification are needed to make cloud useful. BSA previously cautioned against complex cloud authorizations that took too long and inhibited government adoption. Read More >>
