Disney invests in OpenAI
State of play: Disney announced a $1 billion investment in OpenAI, becoming the first major Hollywood studio to license its intellectual property to an AI video generation platform.
The three-year partnership will allow OpenAI's Sora users to create short-form videos featuring more than 200 Disney, Pixar, Marvel, and Star Wars characters starting in early 2026, with curated user-generated content appearing on Disney+. The deal gives Disney equity in OpenAI, access to its technology for internal use, and significant control over how its characters are portrayed.
Why it matters: Disney’s move signals a turning point in how major rights holders approach AI, shifting from defensive copyright enforcement to monetized licensing and co‑creation.
For Disney, it’s a controlled way to test audience appetite for AI‑generated storytelling while keeping brand portrayal within tight guardrails. For OpenAI, it delivers high‑value IP, credibility with other content owners, and a showcase market for Sora’s video capabilities. The partnership sets a precedent for how studios, tech platforms, and creators might share IP in an AI economy, and will influence emerging standards for attribution, royalties, and quality control.
Google introduces instant user interfaces with Generative UI
State of play: Google has introduced Generative UI for Google AI Pro and Ultra subscribers in the U.S., a major step toward AI-designed, fully customized user experiences that go far beyond traditional text or static layouts. Rolling out first in the Gemini app and in AI Mode in Google Search, it allows Gemini models to generate complete user interfaces on the fly, producing the design, code, and a tailored layout for any prompt.
Why it matters: Early evaluations show users strongly prefer Generative UI interfaces over standard LLM text or markdown outputs, ranking them just behind human-designed websites. In time, it may completely fragment brand identity and compress top-, mid-, and bottom-funnel touchpoints into a single interface. And given that reception, it may well raise user expectations for custom, unique experiences, pressuring brands not only to optimize for LLMs but to reinvent their entire digital strategies.
EU investigates Meta over policy change banning rival AI chatbots from WhatsApp
State of play: The European Commission has opened a new antitrust investigation into Meta over a planned WhatsApp policy that could block competing AI chatbots from accessing the platform. The probe follows complaints from smaller AI startups who argue the policy would prevent them from reaching WhatsApp’s massive European user base. Regulators are examining whether Meta’s integration of its own Meta AI assistant gives the company an unfair advantage, with EU antitrust chief Teresa Ribera signaling that interim measures may be imposed ahead of the full ruling.
Why it matters: WhatsApp is one of Europe’s most widely used communication platforms, making access to it a critical distribution channel for emerging AI assistants. If Meta is found to have blocked rivals to protect its own AI ecosystem, it could face fines of up to 10% of global annual revenue, and be forced to reopen access to third-party AI providers. More broadly, the investigation highlights growing regulatory pressure on Big Tech’s use of generative AI, especially when platform control intersects with competitive advantage.
YouTube says it will comply with Australia’s under-16s social media ban, while Reddit lawyers up
State of play: Australia’s world‑first ban on social media accounts for under‑16s took full effect on December 10, forcing ten major platforms to deactivate youth accounts and block new sign‑ups. YouTube ended a months‑long standoff by joining Meta, TikTok, and Snap in compliance, automatically logging out under‑16 users and disabling posting, commenting, or subscribing, though videos remain viewable while logged out.
Reddit is challenging the law in Australia’s High Court, arguing it infringes free expression and shouldn’t apply to the platform, which it says is fundamentally different from “social media”: a network of public forums rather than a platform built around relationship‑driven interactions.
Why it matters: Australia’s ban is emerging as a global test case for large-scale age verification and platform access controls, and other jurisdictions are closely watching its impact. As YouTube and Reddit were keen to underline, it raises the question of what actually constitutes social media. Regardless of the answer, in practice the ban means far less visibility into the habits and preferences of an emerging generation of social media users.
For further context, check out what we said about the ban in November.
$300bn AI start-up Anthropic prepares blockbuster IPO
State of play: Anthropic has hired Wilson Sonsini to begin preparing for what could become one of the largest tech IPOs ever, potentially as early as 2026. The developer of the Claude chatbot is currently in talks for a funding round valuing it above $300bn and has also held early, informal discussions with major investment banks. The move comes as rival OpenAI is conducting its own preliminary IPO planning, setting up a race between the two AI labs to reach the public markets first.
Why it matters: A successful Anthropic IPO would be a landmark test of public investor appetite for AI research labs with massive costs, rapid revenue growth, and unprecedented valuations. Listing ahead of OpenAI could reshape competitive dynamics in the frontier-model sector, giving Anthropic both capital and market validation as the companies vie for leadership. More broadly, the offering would signal whether public markets are ready to absorb AI companies at valuations normally reserved for mature tech giants, and with grumblings of an AI bubble, may turn out to be a bellwether for the sector.
Nano Banana Pro pushes AI realism as watermarks fade
State of play: Google’s new Nano Banana Pro image model is crossing another threshold in synthetic media. CNET’s testing shows it can produce ultra-realistic photos, flawless textures and accurate, readable text, eliminating one of the last easy tells of AI imagery. It also generates convincing but incorrect infographics, and its visible watermark doesn’t always persist. In parallel, AI watermark-removal tools have become highly effective at stripping logos, proof marks and timestamps from images and video in seconds, leaving only invisible watermarks that most users can’t detect.
Why it matters: The combination of hyper-realistic AI imagery and increasingly frictionless watermark removal accelerates the collapse of basic visual trust. It becomes harder for audiences to identify manipulated or fully synthetic content, especially when mistakes look authoritative. This reinforces what we highlighted in our earlier deepfake analysis: organizations, platforms, and the public are entering a landscape where “seeing” is no longer verification, and where strategic communications, crisis readiness, and provenance tools will be tested more often, and more severely.
Not everyone is on board with AI-generated imagery, either: McDonald’s had to pull an AI-generated ad over the intense backlash it received.