
Adobe on Fighting To Restore Trust Online


By Adobe Executive Vice President, General Counsel, and Chief Trust Officer Dana Rao

At Adobe, we deeply understand how impactful an image can be — and how this power can be used for bad as well as good. Bad actors are out there, intent on using manipulated, misleading or fabricated media to sow misinformation. With generative AI booming and some four billion people expected to vote worldwide this year, it has never been more important to restore trust and transparency online.

This is precisely why we co-founded the Content Authenticity Initiative (CAI) in 2019. A global coalition bringing together technology companies, media organizations, NGOs, and academics, the CAI now counts more than 3,000 members, all dedicated to giving people critical context behind what they see online.

That effort is behind Content Credentials, the digital nutrition label that shows information like who made a piece of content and when; any edits made; and, crucially, whether AI was used. Built on the Coalition for Content Provenance and Authenticity (C2PA) technical specification, Content Credentials are featured across Adobe tools, including Photoshop and our generative AI model, Firefly. Companies from Google, Meta, and OpenAI to Sony and Qualcomm have also been working to roll them out. We’re moving quickly toward a future where people can expect Content Credentials on digital media everywhere and can know instantly whether something they see online is real or fake.
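At its core, a Content Credential is a provenance record bound to the media it describes. The sketch below is a simplified illustration in plain Python, using hypothetical field names rather than the actual C2PA schema (which defines a much richer, cryptographically signed format): it attaches creator, edit, and AI-use metadata to a piece of content and binds the record to the content's bytes with a hash, so any later tampering is detectable.

```python
import hashlib

def make_manifest(content: bytes, creator: str, edits: list, ai_used: bool) -> dict:
    """Build a simplified provenance record (hypothetical fields, not the
    real C2PA schema): who made the content, what edits were applied,
    whether AI was used, and a hash binding the record to these exact bytes."""
    return {
        "creator": creator,
        "edits": edits,
        "ai_used": ai_used,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the content hash; if the bytes changed after the
    manifest was issued, verification fails."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"...pretend these are image bytes..."
manifest = make_manifest(
    image, creator="Jane Doe", edits=["crop", "color-balance"], ai_used=False
)

print(verify_manifest(image, manifest))                 # True: content untouched
print(verify_manifest(image + b"tampered", manifest))   # False: bytes changed
```

The hash binding is what makes the label trustworthy rather than merely informative: metadata alone can be copied onto altered media, but a hash mismatch exposes the alteration. The real specification goes further, cryptographically signing the record so the issuer can be verified too.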

We’re also reaching even more widely across industry and government to help restore trust online. We were proud to join our peers on the AI Elections Accord, and we’ve been honored to advise on the development of important policy to create certainty and trust in AI, including President Biden’s Executive Order on AI, NIST’s launch of the US AI Safety Institute Consortium, and the European Union’s landmark AI Act. It’s going to take all of us working together to ensure AI can realize its promise and potential for good. We’re proud of our work so far, optimistic about the work ahead, and heartened to have partners like BSA | The Software Alliance as we forge a path forward.


About the author: 

Dana Rao is the Executive Vice President, General Counsel and Chief Trust Officer at Adobe. He leads Adobe’s Legal, Security, and Policy team and is charged with driving a unified strategy that leverages technology, law, and policy to strengthen Adobe’s products, services, and reputation as a company that employees and customers around the world can trust. He currently serves as a member of BSA’s Board of Directors, where he previously served as chair.

Author:

“AI Policy Solutions” contributors are the industry leaders and experts from enterprise software companies who work with governments and institutions around the world to address artificial intelligence policy. These submissions showcase the pillars outlined in BSA’s recent “Policy Solutions for Building Responsible AI” document by demonstrating how their companies are continuously working toward the responsible adoption of AI technologies.
