IBM’s AI Ethics Governance Approach to Deepfakes

By IBM Vice President and Chief Privacy & Trust Officer Christina Montgomery

This year, more than four billion people are eligible to vote as over 40 countries hold national elections. Before voters head to the polls, policymakers must prohibit the distribution of election-related deepfake content that deceptively impersonates candidates or spreads false information about voting.

Deepfakes have quickly become one of the most pressing societal challenges due to their potential to deceive the public and undermine democracy.

It is essential for both policymakers and technology companies to pursue solutions that address deepfakes. That’s why IBM has developed and implemented an AI ethics governance approach to ensure AI that we develop, use, and deploy is trustworthy. Further, IBM signed the Munich Security Conference’s Tech Accord to Combat Deceptive Use of AI in 2024 Elections and supports the White House’s Voluntary AI Commitments and Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

IBM has long advocated for regulations that precisely target harmful applications of technology. By applying this approach to this specific, high-risk application of AI, policymakers can build firm guardrails that protect free and fair elections, which are foundational to democracy everywhere.

We call on policymakers globally to prohibit the distribution of materially deceptive deepfake content related to elections. In the United States, this can be accomplished by passing the bipartisan Protect Elections from Deceptive AI Act, which would ban the use of AI to generate content falsely depicting federal candidates in political advertisements with the intent of influencing an election. European Union policies, from the Digital Services Act to the AI Act — which IBM has long supported — require consumer-facing platforms to mitigate “systemic risks for electoral processes” and to provide transparency when certain content is not authentic.

Now is the time for policymakers in the US and the rest of the world to act.


About the author: 

Christina Montgomery is Vice President and Chief Privacy & Trust Officer at IBM, where she oversees the company’s global privacy program, compliance, and strategy, and directs all aspects of IBM’s privacy policies.

About the series:

“AI Policy Solutions” contributors are the industry leaders and experts from enterprise software companies who work with governments and institutions around the world to address artificial intelligence policy. These submissions showcase the pillars outlined in BSA’s recent “Policy Solutions for Building Responsible AI” document by demonstrating how their companies are continuously working toward the responsible adoption of AI technologies.
