OpenAI shuts down Sora video app months after launch
State of play: OpenAI has shut down its Sora AI video app and API just months after launch, with no plans to integrate the features into ChatGPT. Disney simultaneously exited its $1 billion licensing deal with the company. OpenAI is now refocusing resources on a ChatGPT desktop "superapp," its Codex coding assistant, and development of an AI-powered browser. The shutdown comes as the company faces mounting pressure to demonstrate sustainable business models beyond its core text-based products.
Why it matters: The trajectory of AI video platforms remains uncertain amid unresolved questions about legal liability and strategic viability. Video generation is significantly more resource-intensive than text-based models, making it expensive to operate at scale. While systems like Sora demonstrate impressive technical capabilities, the gap between what's technically possible and what's commercially sustainable is becoming apparent. User expectations may be outpacing what AI companies can profitably deliver, forcing strategic retreats even from high-profile product launches.
The Pentagon chooses OpenAI after ban on Anthropic
State of play: The Department of Defense has signed a major AI contract with OpenAI after blacklisting its main competitor, Anthropic. Defense officials labeled Anthropic a national security threat after the company refused to allow its technology to be used for autonomous weaponry. While OpenAI insists its agreement includes safety measures, some experts worry the deal lacks explicit safeguards against the military using the technology to gather data on U.S. citizens. Anthropic is fighting the designation in court, though the Pentagon argues that refusing contract terms isn't constitutionally protected.
Why it matters: This situation highlights the growing tension between government requirements and the safety policies of AI developers. It suggests that national security designations may now be used as a tool in contract negotiations, potentially forcing companies to choose between federal eligibility and their stated ethical frameworks.
Meta acquires viral AI social network Moltbook
State of play: Meta has acquired Moltbook, the viral platform where AI agents interact in a Reddit-style forum, just months after its launch. The deal brings co-founders Matt Schlicht and Ben Parr into Meta Superintelligence Labs, a research unit led by former Scale AI CEO Alexandr Wang. The acquisition proceeded despite a significant security breach discovered by the firm Wiz, which revealed that Moltbook's infrastructure had exposed thousands of user credentials.
Why it matters: This deal confirms that Meta is moving aggressively to own the infrastructure of the "agentic web," where AI agents, not just humans, are the primary users. It suggests that for Big Tech, the long-term value of agent-to-agent communication protocols outweighs the risks of early-stage security flaws. For brands, it signals that "agent networks" are the next frontier for social engagement.
UK launches major study on social media restrictions for minors
State of play: British officials are beginning a new research project to see how different types of social media limits affect young people. The study follows a small group of 13- to 15-year-olds as they navigate various rules, such as total bans, strict 60-minute daily caps, or losing access during late-night hours. By monitoring changes in exercise, rest, and mental health, the government hopes to decide if the UK should adopt a nationwide ban for minors similar to the one recently passed in Australia.
Why it matters: This study moves the debate from theory to data, providing the evidence needed to justify or reject a total ban for minors. If the results show significant mental health improvements, it could trigger a wave of similar age-gating laws across Europe. For businesses and creators, it signals a shift toward a more regulated digital landscape where reaching younger audiences may soon require entirely new, non-platform-based strategies.
Trial targets Instagram and YouTube’s “addictive” features
State of play: In a high-stakes trial in Los Angeles, a 20-year-old woman ("Kaley") testified that platform design features, including autoplay, infinite scroll, and notification timing, addicted her as a child and fueled severe mental health struggles. Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri both testified in person, defending their platforms against claims that they were intentionally engineered to be addictive. This is the first of over 1,500 similar lawsuits to reach a jury. Closing arguments are scheduled for March 24.
Why it matters: This case is a critical test for legal responsibility in the tech sector. If the jury rules that platform design itself is a "defective product," it would create a path for plaintiffs to bypass the Section 230 legal protections that have shielded social media giants for decades. It signals a future where digital strategies focused on "hooking" users could be treated as a legal liability.
Major publishers abandon paywalls, return to advertising
State of play: Several prominent publishers, including Time, Quartz, and TechCrunch, have recently removed their digital paywalls in a shift back toward ad-supported business models. After years of prioritizing digital subscriptions, these outlets are now focusing on growing their total audience to increase advertising revenue. While some large chains like Gannett continue to use metered access, the broader industry is increasingly moving toward "membership" models that offer specific perks rather than blocking access to news articles.
Why it matters: This shift suggests that many publishers are finding it difficult to maintain growth through subscriptions alone as readers face "paywall fatigue." For organizations and communications teams, this change increases the potential reach of earned media and strategic partnerships that were previously restricted to paying subscribers. As more premium content becomes freely accessible, digital strategies will need to account for a media landscape where audience scale is once again a primary goal for major outlets.
Meta ordered to pay $375M in New Mexico child safety case
State of play: A New Mexico jury ruled that Meta knowingly harmed children's mental health and concealed child sexual exploitation, awarding $375 million in penalties for violating the state's Unfair Practices Act. Meta will appeal, citing Section 230 protections. A second trial phase in May will determine whether a judge can order platform redesigns such as age verification or algorithm changes. Meanwhile, a federal jury in California has deliberated for over a week on similar claims against Meta and YouTube.
Why it matters: New Mexico argued Meta should be liable for how its algorithms decide what to show users, not just for user-posted content. If the May trial orders Meta to disable engagement features or implement age verification, TikTok, YouTube, and Snapchat face the same legal threat, since their recommendation algorithms work in much the same way. The social media business model depends on maximizing time-on-platform to sell ads, so courts forcing platforms to abandon addictive design could fundamentally undermine ad revenue.