
Docusign on AI Governance and Earning User Trust


By Karen Yankosky, Deputy General Counsel, Compliance

No one misses having to appear in person to sign an agreement. Electronic signature eliminated that inefficiency and launched the transformation of the entire agreement process by enabling users to conduct critical transactions with just their mobile phone or email. AI promises to bring even more innovation to the agreement space, from how we create and negotiate agreements to how we manage them.

While AI delivers benefits like accelerating tasks and routines, it can also introduce risks, including ethical, security, privacy, and intellectual property risks. To earn user trust, organizations deploying AI must be transparent about how they address and manage these risks. That means defining the guardrails and controls they have implemented and demonstrating that those controls work in practice: telling users how their data could be used to train models, specifying the sources of the data used to train those models, describing how the provider checks output quality, undergoing internal and external audits, and investing sufficient resources to drive good governance.

Simply put: Users will not trust and use AI systems unless they see evidence that these specific guardrails and controls exist. AI governance is not a one-size-fits-all proposition: The form it takes will vary across organizations and depend on the tool(s) being deployed. Docusign leverages the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, a globally recognized set of standards that help organizations of all sizes identify and address AI risk. As the foundation of our AI governance, Docusign follows the same robust privacy, security, and data governance processes that have led over 1.5 million customers and more than 1 billion users to trust us with their agreements.

Organizations that deploy AI responsibly understand they must earn user trust through transparency and governance that manages risks effectively. At Docusign, our customers trust us to support their most important transactions, so we understand how important it is to demonstrate trustworthiness every day. And we know that the success of AI, like most innovations, will depend on trust.


About the author: 

Karen Yankosky is Deputy General Counsel, Compliance at Docusign.

About the series:

“AI Policy Solutions” contributors are the industry leaders and experts from enterprise software companies who work with governments and institutions around the world to address artificial intelligence policy. These submissions showcase the pillars outlined in BSA’s recent “Policy Solutions for Building Responsible AI” document by demonstrating how their companies are continuously working toward the responsible adoption of AI technologies.
