On June 5, the New York Times reported that the Department of Justice (DOJ) and the Federal Trade Commission (FTC) are nearing a deal to divide antitrust investigations in the AI industry. As part of the arrangement, the DOJ will investigate Nvidia’s dominant role in AI semiconductors, while the FTC will examine Microsoft and OpenAI for potential competitive advantages in large language models. Per reporting from the Wall Street Journal, the FTC has already opened an investigation into Microsoft’s deal with Inflection AI, focused on whether it was deliberately structured to avoid government review of the transaction. The move is the latest effort by the Biden-era FTC and DOJ to rein in the power of Big Tech. DOJ antitrust chief Jonathan Kanter told the Financial Times that the agency is looking into the AI sector “with urgency” to ensure that already-dominant tech companies do not control the market, adding that he is specifically examining the “monopoly choke points and the competitive landscape.”
On June 4, the Joint Economic Committee held a hearing entitled “Artificial Intelligence and its Potential to Fuel Economic Growth and Improve Governance.” There was near consensus among Members and witnesses that overregulating the development of AI would be detrimental, with some Members pointing to the E.U.’s AI Act as a set of overreaching regulations that the U.S. should avoid. The hearing highlights the disparate approaches the U.S. and E.U. have taken to regulating AI to date. A full summary of the hearing can be found here.
Scrutiny continues to mount on corporate accountability in AI. This week, current and former staff members at OpenAI, Google DeepMind, and Anthropic released an open letter warning of growing recklessness and secrecy at some of the world’s biggest AI developers. The letter called for greater accountability at the frontier companies leading the AI revolution, arguing that broad confidentiality agreements block employees from voicing their concerns externally and that existing whistleblower protections are insufficient because they cover only illegal activity, while many of these risks are not yet regulated. The New York Times spoke with William Saunders, a research engineer who left OpenAI in February, who shared, “when I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward.’”
Google announced that it is refining its “AI Overviews” search feature, which offers an AI-generated answer to user search queries, after users reported receiving strange and incorrect answers from the feature, such as recommendations to eat rocks for health benefits and to use glue to keep cheese from sliding off pizza. This is the second time this year that Google has walked back an AI product release: in February, the company announced structural changes to its Gemini chatbot after reports that it produced ahistorical images and blocked requests for depictions of white people. The incident comes amid scrutiny from the News Media Alliance over how AI Overviews will reduce publishers’ ability to monetize content and “lead to further decline of quality content these AI offerings rely upon to deliver accurate results.”