Artificial Intelligence

BSA Publishes New Solutions for Responsible AI Policy

Policymakers worldwide continue to wrestle with how to craft rules for artificial intelligence (AI) that harness the benefits of technology and innovation while managing AI risks.

Since 2018, BSA | The Software Alliance has worked with enterprise software companies to develop constructive AI policy ideas for governments and institutions around the world. BSA is now publishing our 2024 “AI Policy Solutions,” a comprehensive set of recommendations for policymakers on new and emerging artificial intelligence issues.

“AI Policy Solutions” builds upon BSA’s leadership on AI policy to offer new ideas for tackling significant issues, including:

Encouraging global harmonization
  • Pursue interoperability to develop a shared vision of risk-based approaches to AI policy.
Implementing strong corporate governance practices to mitigate AI risks
  • Support implementation of risk management programs;
  • Require impact assessments for high-risk uses;
  • Support robust testing for high-risk AI systems;
  • Distinguish between different actors in the AI ecosystem; and
  • Ensure appropriate policies and information sharing for foundation models.
Promoting innovation and creativity
  • Support copyright for human creativity assisted by AI;
  • Develop voluntary methods for creators to indicate that they do not want their artistic works used for AI training, akin to “do not crawl” tools;
  • Recognize the sufficiency of existing copyright law to remedy infringement; and
  • Enact legislation to protect creators’ name, image, and likeness.
Protecting privacy
  • Support strong, comprehensive consumer privacy laws;
  • Provide a targeted right for individuals to opt out of solely automated decisions that have legal or similarly significant effects; and
  • Encourage the development of privacy-enhancing technologies.
Facilitating government use and procurement of AI
  • Implement the National Institute of Standards and Technology (NIST) AI Risk Management Framework;
  • Leverage multi-cloud technologies to support government use of AI;
  • Invest in AI-driven cybersecurity solutions; and
  • Enable the use of commercial sector AI applications.
Promoting transparency
  • Encourage watermarks or other disclosure methods for AI-generated content;
  • Promote the Coalition for Content Provenance and Authenticity (C2PA) open standard; and
  • Disclose to consumers when they are interacting with AI.
Advancing cybersecurity with AI
  • Use AI to assist in secure software development; and
  • Harness AI to improve cybersecurity risk management.
Protecting national security
  • Support narrowly tailored measures to address national security risks; and
  • Leverage AI to improve critical infrastructure.
Promoting multiple development models
  • Ensure AI policies support the development of both open-source and proprietary systems.
Supporting sound data innovation policies
  • Facilitate global data flows; and
  • Enable public use of non-sensitive government datasets.
Investing in AI research and development
  • Increase investment in research and development; and
  • Encourage research and development cooperation among different countries.
Building the workforce for an AI future
  • Improve access and support for STEM education; and
  • Expand workforce training and alternative pathways.


BSA’s 2024 “AI Policy Solutions” comes amid a flurry of activity on artificial intelligence policy around the world. In the United States, the White House continues its work to implement President Biden’s executive order on artificial intelligence, while House and Senate leaders study AI issues and prepare to share ideas for legislation in the coming months. US state legislatures also continue to consider hundreds of bills on a wide variety of AI topics.

Internationally, the European Union has finished substantive work on the AI Act, while governments in Japan, India, and the United Kingdom are considering their own approaches to AI policy.

As a global policy and advocacy organization, BSA is working with member company personnel around the world to advocate for a consistent, harmonized approach to AI rooted in our “AI Policy Solutions” recommendations.

Artificial Intelligence

BSA Member Companies Discuss Addressing AI Responsibility and Accountability in Legislation

BSA held a panel discussion on how companies are managing AI risks and why tools like the NIST AI Risk Management Framework (RMF) are important elements in AI legislation. Read More >>

Artificial Intelligence

Momentum Builds in Congress for Using NIST AI RMF in Procurement to Set Rules for Responsible AI

As policymakers grapple with how best to promote the responsible development and use of AI, the RMF published by NIST is emerging as an important tool to manage AI risks. Read More >>

Artificial Intelligence, Cybersecurity

BSA Brings Together Industry, Government, and Policy Leaders for TRANSFORM Summit

BSA | The Software Alliance brought together government officials, industry executives, and researchers to discuss tech issues at TRANSFORM, BSA’s inaugural policy summit, in Washington last week. Read More >>