From left to right: BSA Senior Director of Legislative Strategy Bruce Miller; Alteryx Senior Director, Privacy and Product Counsel and Data Protection Officer Jennifer Davide; IBM Distinguished Research Staff Member Michael Hind; and Cisco Senior Director for Technology Policy, Global Government Affairs, Eric Wenger.
As US lawmakers continue to evaluate rules for artificial intelligence (AI), BSA | The Software Alliance welcomed over 90 congressional staff and industry stakeholders for a Capitol Hill briefing on how current frameworks, like the NIST Risk Management Framework (RMF), are helping companies manage risks and ensure the responsible development of AI.
BSA Senior Director of Legislative Strategy Bruce Miller moderated a panel of BSA member company representatives. He was joined by Alteryx Senior Director, Privacy and Product Counsel and Data Protection Officer Jennifer Davide; IBM Distinguished Research Staff Member Michael Hind; and Cisco Senior Director for Technology Policy, Global Government Affairs, Eric Wenger.
In focus were methods to bolster transparency when people encounter AI, similarities between privacy governance and possible AI policy, and how their companies are using risk management frameworks to maintain high standards and best practices for risk mitigation.
“We’re saying that there are a limited set of use cases that we should define as being sufficiently high risk, and this is where your attention should be focused,” Wenger said, explaining how policymakers should take a risk-based approach to AI policy.
Hind and Wenger discussed how certain use cases of AI should be defined as high-risk to ensure that their risks are properly mitigated.
“A big area in this space of trustworthy AI is transparency,” Hind explained. “Transparency means documenting what happened when you built the model. One of the main things is documenting the training data sets.”
The focus on a risk-based approach to AI regulation and clear obligations for AI developers and deployers echoed BSA CEO Victoria Espinel’s testimony to Congress earlier this month.
Hind likened these transparency documents to a nutrition label: a consumer might not know exactly how the food is made, but can see what it contains.
Davide continued by describing how policy should reflect the distinction between AI developers and deployers when determining where risks should be mitigated.
“We have to define what those responsibilities are in relationship to where you are in the value chain,” Davide said.
The panelists closed by encouraging policymakers to require companies to have risk management programs that follow a set of best practices like the NIST RMF.
BSA continues to work with policymakers at both the federal and state levels to instill guardrails for the responsible development and deployment of AI and to encourage AI risk management.