
TRANSFORM Dialogue: Building Trust in the AI Era

Left to right: Miriam Vogel, President & CEO, EqualAI, and Chair, National AI Advisory Committee; Sen. James Maroney, Deputy Majority Leader, Connecticut State Senate; Jessica Nguyen, Deputy General Counsel, AI Innovation & Trust, Docusign; Hugh Gamble, Vice President, Federal Government Affairs, Salesforce; and Aaron Cooper, Senior Vice President, Global Policy, BSA | The Software Alliance.

During a featured panel at BSA | The Software Alliance’s TRANSFORM Dialogue, leaders in artificial intelligence (AI) policy described how they are helping to build trust in this technology through business practices, partnerships, and regulations.

Panelists noted that enterprise software companies compete on trust, which is earned through privacy protections and risk management programs built into their software development processes.

“It seems like every time we take a sip of coffee, there is a new AI technology company,” said Jessica Nguyen, Deputy General Counsel of AI Innovation & Trust at Docusign. Nguyen further pointed out how enterprise software companies distinguish themselves from other AI developers: “Trust is really a competitive differentiator. … To win the trust of customers is just so incredibly important.”

That process may involve the government making continuous efforts to understand and update regulation in a way that builds public trust, said Salesforce Vice President of Federal Government Affairs Hugh Gamble.

“I think it’s important for government, which has approached this issue in a careful way thus far, to continue to do that and understand that this isn’t a one-size-fits-all,” said Gamble. “It’s going to be an iterative process, but we need for the public to feel certainty in the products that they’re using, so that we can continue to innovate and produce things that are going to push efficiency forward.”

Connecticut State Sen. James Maroney, a leading public official coordinating AI policy work across 47 US states, addressed how industry and government need to work together to develop regulations and safeguards for the adoption of AI technologies.

“I think our role is to work together with industry to come up with regulation and safeguards,” said Maroney, who said rules around transparency and testing would spur AI adoption by both businesses and consumers. “So, I think our role is to convene and work together to come up with some well-thought-out — starting with risk-based — regulations.”

“I think that the best role of legislators is making sure that we realize the opportunity, making sure that there’s AI, and making sure that more people feel comfortable with this technology,” said Miriam Vogel, Chair of the National AI Advisory Committee and President and CEO of EqualAI.

Vogel added: “It’s not just digital literacy; it’s much broader than that. It’s critical thinking; it’s too many audiences. We need familiarity, and we need ownership, and we need people to feel comfortable as to what they’re using and how.”

Aaron Cooper, BSA Senior Vice President of Global Policy, moderated the second panel, which focused on building trust in the AI era.

“And the objective — ultimately — is you want to make sure that every player in the value chain is assessing risk and mitigating the risk of harm as best they can given the knowledge they have,” said Cooper, summarizing the conversation.

Watch the full conversation below.

Author:

Lindsay Emery is communications coordinator for BSA | The Software Alliance, based in the association's Washington, DC, office.
