Congress and federal agencies continue their efforts to address AI safety and trustworthiness. In addition, the Biden Administration, which has made competition policy a core part of its economic agenda, has signaled that it is beginning to examine competition in the AI industry. Most notably since our last newsletter:
The Senate Judiciary Subcommittee on Intellectual Property held a hearing focused on a discussion draft of the NO FAKES Act, legislation that would shield artists from unauthorized digital replicas of themselves. During the hearing, Members questioned witnesses from the movie, music, radio, and television industries about whether the proposal strikes the right balance among protecting artists, upholding First Amendment protections, and ensuring that AI innovation continues. Members also asked about AI watermarking and about holding streaming platforms and Big Tech companies more accountable. Subcommittee Chair Chris Coons (D-DE), a lead sponsor of the bill, stated he intends to formally introduce the legislation later this month. A full summary of the hearing can be found here.
On April 29, the Department of Commerce announced several new actions to implement President Biden’s Executive Order on AI and to develop guidance for the safe deployment of AI: (1) the National Institute of Standards and Technology (NIST) released four draft publications intended to help improve the safety, security, and trustworthiness of AI systems; (2) NIST launched a challenge series (NIST GenAI) to develop methods for distinguishing content produced by humans from content produced by AI; and (3) the U.S. Patent and Trademark Office (USPTO) published a request for comment seeking feedback on how AI could affect determinations of patent eligibility under U.S. law.
Bloomberg reported that the U.S. Justice Department will convene a workshop at Stanford University on May 30, bringing together industry leaders, researchers, and government officials to discuss competition concerns in the AI industry. The workshop comes as the Biden Administration’s antitrust agencies are pursuing cases against some of the larger tech platforms, including Google, Amazon, and Meta.
Additionally, while the policy conversations continue, a fork in the road appears to be emerging in the private sector over how companies navigate the mounting IP and copyright issues arising from large language model training. For example, on May 7, OpenAI announced that it will launch a new tool in 2025, called Media Manager, that allows content creators to opt their work out of the company’s AI development. The announcement comes as OpenAI fights lawsuits from artists, writers, and publishers who allege the company inappropriately used their work to train the algorithms behind ChatGPT and other AI systems. By contrast, the Financial Times (FT) announced a strategic partnership and licensing agreement with OpenAI to incorporate FT’s journalism into the models behind ChatGPT. And just one day later, eight daily newspapers owned by Alden Global Capital sued OpenAI and Microsoft, accusing the tech companies of illegally using news articles to power their AI chatbots.