
BSA Member Roundtable: What Do We Expect From Congress on Tech Policy in 2024?

BSA asked leading advocacy professionals at its member companies for their 2024 predictions on what to expect on Capitol Hill in the coming year.

Participants responded to one central question: “What do we expect from Congress on tech policy in 2024 and what should it look like?”

Their answers follow below.

Danielle Brown, BSA Senior Director of Legislative Strategy

We have seen AI policy emerge as one of the few areas of strong bipartisan collaboration this Congress. This year, we can expect members to continue looking at AI’s potential for innovation and the possible risks involved with this rapidly developing technology. Specifically, the NIST AI RMF and President Biden’s recent executive order have taken meaningful steps toward identifying how to manage risk, but Congress has an important role to play in putting strong guardrails in place. We can expect more hearings on AI in nearly every committee, and possible legislative text on issues including workforce, deepfakes, and possibly even copyright. AI presents complex and novel policy issues, but lawmakers could be ready to roll up their sleeves in a year when we may see little else accomplished in the Capitol.

Bruce Miller, BSA Senior Director of Legislative Strategy

Partisan gridlock in Congress makes even “must-do” items like funding the government or renewing expiring provisions of major federal laws (such as the farm bill or FAA reauthorization) difficult to accomplish. While that gridlock, along with the 2024 election season, might make a major push on AI more difficult, there are “light-touch” policy changes that may have a path to becoming law in 2024. Some of those more available measures include bills to harmonize conflicting laws as they relate to AI in procurement, AI regulations that primarily affect national security, and measures that address deepfakes in ways that complement existing laws or increase transparency. There is also additional work underway – ranging from dozens of hearings on AI-related issues to the introduction of somewhat comprehensive AI bills later this year – that might not materialize as action this year, but which sets the stage for the next Congress and beyond. Observers should remember that AI is changing and advancing at a rapid rate. Congress is a notoriously slow-moving institution (as the founders intended), so there will be many opportunities to continue framing the AI debate for lawmakers.

Danielle Johnson-Kutch, DocuSign Director of Government Affairs

We expect issues relating to AI, privacy, and the impact of new technologies to remain at the forefront of debate. While many congressional hearings and bills will likely focus on possible drawbacks, we also hope to see Congress consider technology policies that unlock new capabilities and value for the American people. For example, we hope to see the reintroduction of the E-SIGN Modernization Act, which creates an opportunity for Congress to recognize how much technology has advanced in the 20+ years since passage of the Electronic Signatures in Global and National Commerce Act (ESIGN Act), and to encourage further advancement through the use of AI and digital identity verification.

David Lieberman, Bentley Systems Senior Director, US Government Relations

It is going to be difficult for Congress to get much completed on technology issues, such as data privacy and AI and machine learning, during a truncated election year. Not only are these issues new and emerging (particularly in the case of AI and ML), but there are many different viewpoints on how and what should be regulated. That said, with help from BSA and its member companies, I am hopeful that we can coalesce around a federal data privacy framework – instead of the state patchwork we have today – and the beginning of an AI/ML statutory framework that includes an understanding of how AI/ML benefits our infrastructure.

The use of AI/ML for infrastructure design, construction, and operations is a terrific story. AI has the power to de-risk an asset by providing another layer of analysis to detect things like cracks and rust on bridges. AI can also help engineers design better infrastructure assets by making design suggestions and assessing future infrastructure conditions. Here’s to an exciting year ahead!

Evangelos Razis, Workday Senior Manager, Public Policy, AI and Privacy Policy Lead

Congress’s to-do list on tech policy grows each year: pass a comprehensive data privacy law, enact sensible guardrails on AI used for high-risk decisions, and continue to modernize the federal enterprise with secure cloud software. As we head toward elections, however, expect a premium to be put on targeted, common-sense ideas that build trust and have bipartisan support. This means bills like the Federal AI Risk Management Act, which would require federal agencies and companies selling to the federal government to adopt the NIST AI Framework. Another is reauthorization of the Workforce Innovation and Opportunity Act, which lawmakers should use to retool federal programs to support an AI-ready workforce. Cyber resiliency will also be top of mind. As the Biden Administration continues to implement its cybersecurity executive order, lawmakers and software companies alike will look for clear, workable rules and better outcomes from public-private partnerships across the federal government.

Bill Wright, Elastic Head of Global Government Affairs

In 2024, a hotly contested election year, Congress will intensify its focus on AI policy, building on the groundwork laid in 2023. This past year, Congress hosted nine insight forums on AI and held more than 100 hearings. The role of technology in elections, particularly the impact of AI-generated content and misinformation on voter behavior, will take on a new urgency in 2024. The potential for foreign adversaries to use generative AI, such as AI-generated deepfakes targeting politicians, to influence elections could spur Congress to enact more decisive AI regulation. However, I think the overall approach to AI regulation in the US in 2024 will look a lot like 2023 – decentralized, with a reliance on fragmented state privacy laws and judicial decisions. A unified national framework for responsible use of AI will remain out of reach in 2024.

Author:

Craig Albright serves as BSA’s Senior Vice President for US Government Relations. In this role, he leads BSA’s team that drives engagement with Congress, the Administration, and all US states. He’s responsible for developing and implementing advocacy strategy to deliver results on issues across BSA’s policy agenda.

Prior to joining BSA, Albright spent four years as the World Bank Group's Special Representative for the United States, Australia, Canada and New Zealand, managing relations with government officials, private sector executives, think tank academics, civil society leaders and others. Before that, Albright spent more than 12 years in the US government. He served in the White House as Special Assistant to President George W. Bush for Legislative Affairs and Deputy Assistant to Vice President Dick Cheney for Legislative Affairs. In Congress, his positions included Legislative Director and Chief of Staff for former Congressman Joe Knollenberg of Michigan and Chief of Staff for Congresswoman Kay Granger of Texas.

Albright has been identified as one of the Top 100 association lobbyists by The Hill news organization and one of Washington’s Most Influential People by Washingtonian magazine. He is a native of the Detroit area and holds a BA in Economics from Michigan State University.
