Next week, the Senate Commerce Committee is slated to consider two pieces of bipartisan AI legislation. On the markup agenda is the Future of AI Innovation Act, introduced by Commerce Committee Chair Maria Cantwell (D-WA) along with Sens. Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN), which aims to boost public- and private-sector AI investment and research and to support ongoing work at the National Institute of Standards and Technology’s AI Safety Institute on voluntary standards for AI. Also on the agenda is the Create AI Act, led by Sens. Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD), which would establish the National Artificial Intelligence Research Resource (NAIRR) as a shared national research infrastructure, broadening access to the tools needed for AI development.
The dangers of deepfakes were the subject of a recent hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Discussion touched on the potential impact of deepfakes and misinformation on the upcoming presidential election, strategies to mitigate those harms, pending AI legislation, Section 230 reform, and educational campaigns to protect people from malicious uses of AI. During the hearing, Sen. Amy Klobuchar (D-MN) announced that the Senate Rules and Administration Committee, which she chairs, will soon mark up two bills addressing AI-generated content in political ads, including a requirement for disclaimers on AI-generated ads that are not deepfakes. A full summary of the hearing can be found here.
On April 26, the Department of Homeland Security (DHS) announced the establishment of the Artificial Intelligence Safety and Security Board (the Board) to advise the federal government on how to protect the nation’s critical services – such as pipelines, power grids, and transportation services – from “AI-related disruptions.” The Board includes 22 representatives from a range of sectors, including software and hardware companies, critical infrastructure operators, public officials, the civil rights community, and academia. The establishment of the Board, which is slated to meet for the first time in May, is part of a directive from President Biden’s October 2023 AI EO and comes after a DHS report released late last year warned that AI enables U.S. adversaries to launch “faster, efficient, and more evasive cyber attacks.”
The UK’s Competition and Markets Authority (CMA) is scrutinizing partnerships between Big Tech companies and AI startups over competition concerns. On April 24, the CMA announced it was gathering information from market players on the $4 billion collaboration between Amazon and Anthropic, as well as Microsoft’s partnerships with Mistral and with Inflection AI, a startup that lost its chief executive and most of its staff to Microsoft last month. The announcement follows the CMA’s publication of an update paper identifying concerns about how the market dominance of Big Tech firms – particularly Google, Apple, Microsoft, Meta, and Amazon – could lead to negative outcomes and “winner-takes-all” dynamics in the emerging AI market. The CMA is already examining the partnership between Microsoft and OpenAI.