
BSA AI Solutions: Promoting AI Transparency and Trusted Content

The advent of AI-enabled tools has prompted questions about how malicious actors might exploit artificial intelligence to exacerbate misinformation, create and spread disinformation, and ultimately undermine trust in institutions.

Both the public and watchdog groups need trusted tools to discern what is fake and what is real. This is why BSA | The Software Alliance supports policymakers' efforts to promote disclosure methods for AI-generated content, such as watermarking, content provenance, and fingerprinting. Ensuring these methods are widely adopted and reliable will not be simple, but it should be a priority for governments and businesses alike.

Establishing successful disclosure methods for AI does not require reinventing the wheel. Standards for these methods can build on progress already made by companies and government entities, both of which will need reliable tools they can use collectively.

At the same time, building tools that help the public recognize AI models and identify when they are interacting with AI-generated content or AI-enabled tools fosters a better understanding of artificial intelligence. As AI innovation continues, BSA and enterprise software companies are offering a series of solutions for policymakers around the world to build trust, promote AI transparency, and help combat deceptive information.

BSA Supports:

  • Encouraging watermarks or other disclosure methods for AI-generated content;
  • The Coalition for Content Provenance and Authenticity’s (C2PA) efforts to develop standards for content authenticity and provenance; and
  • Disclosures for AI interactions with consumers.

Why It Matters:

The policies supported by BSA ultimately help the public better understand what content is human-generated and what is AI-generated.

Watermarks and other disclosure methods for AI-generated content, combined with the C2PA standard for content authenticity, provide secure, indelible provenance for content and help the public decide what is trustworthy. Disclosing to consumers when they interact with AI, and preparing AI vendors to explain their models and outcomes, are additional steps toward building trust in AI and understanding of its uses.

What’s Going on:

Industry and policy leaders are advancing additional commitments and solutions designed to build trust in AI and address related issues:

  • The October 2023 White House Executive Order on Artificial Intelligence tasked the National Institute of Standards and Technology with establishing technical standards for tools that distinguish authentic content from AI-generated content.
  • In 2021, media and technology companies formed C2PA to work together on technical standards for content authentication.
  • BSA welcomed the Munich Security Conference’s Tech Accord to Combat Deceptive Use of AI in 2024 Elections, an important, private-sector commitment to take action against the misuse of AI.
  • The European Union AI Act requires deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake to disclose that the content has been artificially generated or manipulated.
  • The G7’s Hiroshima Process International Guiding Principles address these important transparency principles for AI developers and deployers.

Learn More:

  • Adobe’s Executive Vice President, General Counsel and Chief Trust Officer explains the effort behind creating the Content Authenticity Initiative and how it gives people critical context about what they see online.
  • IBM Chief Privacy & Trust Officer Christina Montgomery writes about what IBM is doing to enact AI ethics governance, and the policies needed to combat deepfakes related to elections.

Read “BSA’s Policy Solutions for Building Responsible AI” to learn more about the comprehensive set of recommendations for policymakers worldwide.

Author:

“AI Policy Solutions” contributors are the industry leaders and experts from enterprise software companies who work with governments and institutions around the world to address artificial intelligence policy. These submissions showcase the pillars outlined in BSA’s recent “Policy Solutions for Building Responsible AI” document by demonstrating how their companies are continuously working toward the responsible adoption of AI technologies.
