Senate AI Working Group releases roadmap: On Wednesday, May 15, the Bipartisan Senate AI Working Group, led by Majority Leader Chuck Schumer (D-NY) alongside Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), released its highly anticipated roadmap for artificial intelligence policy. The roadmap, which is meant to serve as a guide for committees to turn into specific legislation, includes support for a comprehensive federal data privacy law, recommends non-defense AI innovation investments of at least $32 billion per year, and offers several other policy suggestions regarding workforce training, language gaps in existing laws, and standards for AI use. The roadmap has received mixed reviews off the Hill: civil society groups argue that it favors the tech industry, lacks guardrails for AI development and IP protections, and fails to protect against discrimination, while industry organizations such as TechNet and BSA | The Software Alliance praised it for providing a path to strengthen American innovation and leadership in AI technologies.
Senate Rules Committee passes three AI bills: On May 15, the Senate Rules Committee, chaired by Senator Amy Klobuchar (D-MN), advanced three AI-related bills. Two of the bills – the Protect Elections from Deceptive AI Act, which would ban the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads, and the AI Transparency in Elections Act, which would require disclosures on AI-manipulated political ads – advanced on party-line votes, with Republicans expressing concern that the bills are “overreaching” and would impose vague and burdensome regulations. The third bill, the Preparing Election Administrators for AI Act, which would develop voluntary guidelines to help election administrators handle AI, received bipartisan support and passed out of Committee unanimously. The markup highlights Congress’ particular focus on the potential for AI to disrupt elections; however, the persistent divide between Republicans and Democrats will make passing legislation difficult. A full summary of the markup can be found here.
Colorado enacts landmark AI law: On May 17, Colorado became the first state to enact a sweeping law regulating the private sector’s use of AI. S.B. 205 creates consumer protections against potential discrimination when AI is used in “consequential” decisions around education, housing, health care, and other services, and requires deployers of high-risk AI systems to implement risk management policies and to inform consumers when the technology is being used. The new law, signed by Democratic Governor Jared Polis, offers a possible model for other states to consider amid a lack of legislative action from Congress.
OpenAI is back in the hot seat: On May 20, Scarlett Johansson threatened legal action against OpenAI, claiming that one of the ChatGPT voices unveiled in September sounded “eerily similar” to her own, even though Johansson had declined OpenAI CEO Sam Altman’s request to use her voice. On May 19, before Johansson made her public statement, OpenAI announced it would pull the Sky voice from ChatGPT and published a blog post contending that the voice was not intended to sound like Johansson. The controversy over the Sky voice comes on the heels of the company announcing the dissolution of a team focused on mitigating long-term risks in AI development, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist Ilya Sutskever. Whether these events ultimately affect OpenAI beyond the recent spate of negative media coverage remains to be seen. According to CNN, “legal experts say Johansson may have a powerful and credible claim in court if she does decide to sue.” But these episodes are another powerful example of the tension between AI companies racing to push new, innovative products to market and the consumers and content creators calling for safe and ethical AI development.