In this BSA series – “Why AI?” – enterprise software leaders explain in their own words how artificial intelligence (AI) is having a positive impact on individuals, businesses, and organizations worldwide. In this submission, member of BSA’s Board of Directors and Adobe Executive Vice President, General Counsel and Chief Trust Officer Dana Rao writes about empowering creators through responsible training and deployment of AI systems.
1. Why AI?
Adobe has been investing in AI for over a decade, using it to help our customers unleash their creativity, perfect their craft, and power their businesses in a digital world, all in line with our AI ethics principles of accountability, responsibility, and transparency. Earlier this year, we built on that legacy with our creative generative AI model, Adobe Firefly. We are at the forefront of an incredible technological transformation, and it is essential that we take this opportunity to thoughtfully address the implications of AI as we build our future together.
Last week, Adobe announced our support for the White House Voluntary AI Commitments to promote safe, secure, and trustworthy AI. These commitments represent a strong foundation for ensuring the responsible development of AI and are an important step in the ongoing collaboration between industry and government that is needed in the age of this new technology. In addition, we made our own commitment to empower creators and creators' rights in the age of AI.
2. Can you give an example?
Adobe trained our generative AI model, Firefly, on our own licensed Adobe Stock images, works in the public domain, moderated generative AI content, and work that is openly licensed by the rightsholder. Using a training set like this minimizes the risk of situations like style impersonation, where an AI tool is used to mimic a particular artist's work. But because other tools out there are trained primarily on content from the open web, intentional misuse of AI tools is a real issue for our creative community and a legitimate concern right now.
That’s why Adobe has proposed that Congress establish a new Federal Anti-Impersonation Right (the “FAIR” Act) to address this type of economic harm. Such a law would give artists a right of action against those who intentionally and commercially impersonate their work or likeness through AI tools. It would provide a new mechanism for artists to protect their livelihoods from people misusing this new technology, without having to rely solely on laws around copyright and fair use. The principle is simple: intentional impersonation using AI tools for commercial gain isn’t fair. The FAIR Act aims to protect artists from that misuse while preserving the ability to evolve and innovate on style. AI is transformative; let’s make it work for everyone.
3. Where can we learn more?
We outline our approach to empowering creators and enabling responsible deployment of AI on the Adobe Blog. Many questions remain about AI and how it can be trained, and in some cases it will be a while before we have answers. Adobe will continue to advocate for protections and policies that support artists, and we know that solutions work best when they build on input from the community they are intended to protect.
About the author:
Dana Rao is the Executive Vice President, General Counsel and Chief Trust Officer at Adobe. He leads Adobe’s Legal, Security, and Policy team and is charged with driving a unified strategy that leverages technology, law, and policy to strengthen Adobe’s products, services, and reputation as a company that employees and customers around the world can trust. He currently serves as a member of BSA’s Board of Directors, where he previously served as chair.