Sam Altman’s sudden dismissal and reinstatement raise questions over AI’s future
State of Play: It has been a busy week for the world of AI, and in particular for OpenAI and Microsoft. On Friday, November 17, OpenAI’s board announced the sudden dismissal of Sam Altman, the CEO and very much the face of OpenAI, citing a lack of candor in his communications with the board.
This was followed by a number of high-profile resignations from OpenAI and, more dramatically, a threat by the overwhelming majority of OpenAI’s staff to resign unless the board members responsible for Altman’s dismissal themselves stepped down. OpenAI then announced that ex-Twitch CEO Emmett Shear would replace Altman on an interim basis. After OpenAI’s key investor Microsoft announced that it was in turn hiring Altman and co-founder Greg Brockman, OpenAI ultimately confirmed that Altman would return as chief executive under a new board.
Why it matters: Though the dust has yet to settle, the turmoil at OpenAI, a company reportedly valued at around $90 billion, has far-reaching implications. The unfolding drama sheds light on the challenges of corporate governance and ethical leadership in rapidly evolving tech sectors. The situation raises wider questions:
OpenAI's stability: Altman may be back, but can OpenAI recover from this upheaval and maintain investor and market confidence as a stable and innovative entity?
Microsoft's strategic play: Microsoft's swift moves to recruit Altman and to court other OpenAI talent underscore that it is playing the long game, with OpenAI or without it. The turmoil may well cement Microsoft's influence over the start-up.
Pace of AI development and governance: One of the rumored reasons for the dismissal was a focus on commercial expansion at the expense of safety and ethical considerations; this tension between governance and growth is unlikely to go away.
Talent dynamics: The risk of talent migration is a crucial factor. Tech layoffs may still feel recent, but the events of the last few days underline how fiercely tech giants such as Microsoft and Salesforce are competing for top AI talent.
Broader industry impact: This situation could set a precedent in AI development and corporate partnerships, influencing investment strategies and ethical considerations across the tech sector.
How this situation resolves will not only decide the fate of OpenAI and shape Microsoft's AI ambitions but could also reshape the contours of the AI industry. It remains a space to watch.
Geoffrey Hinton states humanity is at a “turning point” regarding AI
State of Play: Computer scientist and AI pioneer Geoffrey Hinton, often called the “godfather of artificial intelligence”, argued on CBS’ 60 Minutes that AI systems are better at learning than humans and will eventually develop self-awareness and consciousness. He highlighted risks such as fake news, unintended bias, and autonomous battlefield robots, and called for experiments, regulation, and a treaty banning military robots.
Why it matters: The reproduction of biases, inaccuracies, and fake content is growing at rapid speed, and so are the challenges for companies exposed to these risks. Continuously educating leadership teams, staff, and other stakeholders, and raising awareness, is becoming a key task for communicators across sectors and regions.
‘Ransomware as a service’ underlines need for cybercrime preparedness
State of Play: The ransomware group LockBit has been identified as the gang responsible for a severe ransomware attack on the Industrial and Commercial Bank of China (ICBC), the world's largest lender by total assets. LockBit operates as a "ransomware as a service" enterprise, developing the malware and leasing it to affiliates who carry out attacks in return for a share of the proceeds. Active since at least the start of 2020, the group has extorted more than $100 million in ransoms from as many as 1,000 victims globally.
Why it matters: Ransomware attacks have become ever more frequent and sophisticated in recent years, and the example of LockBit illustrates the professionalization of what can by now fairly be called an industry. The incident underscores the urgency of continuously developing strong countermeasures and crisis response strategies, particularly for system-critical industries.
Uncertainty about future AI business models
State of Play: The AI industry is at a critical juncture as it searches for sustainable business models. Microsoft is reportedly losing money on GitHub Copilot, with the service costing up to $80 per user per month to run while charging users just $10 a month. Meanwhile, OpenAI is reported to have lost $540 million in 2022, though it is expected to generate $200 million in revenue this year and $1 billion by 2024, according to sources.
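To put the reported Copilot figures in perspective, here is a minimal back-of-the-envelope sketch using only the numbers cited above; note that the $80 figure is a reported upper-end cost rather than an average, and the variable names are ours.

```python
# Back-of-the-envelope sketch of the reported GitHub Copilot economics.
# Assumption: the $10 subscription price and the "up to $80" per-user monthly
# cost are taken from the reporting above; $80 is a worst case, not an average.

subscription_price = 10    # USD per user per month (reported Copilot price)
upper_end_cost = 80        # USD per user per month (reported worst-case cost)

monthly_shortfall = upper_end_cost - subscription_price
annual_shortfall = 12 * monthly_shortfall

print(f"Implied shortfall for a heavy user: ${monthly_shortfall}/month, "
      f"roughly ${annual_shortfall}/year")
```

On those reported figures, a heavy user could cost Microsooft on the order of $70 a month more than the subscription brings in; the average shortfall is presumably much smaller, but the direction is the same.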
Why it matters: The advent of generative AI and the tools that come with it has shaken up whole industries, with the communications and media sector particularly affected. At present, the industry is still at an early stage: players like Microsoft and Google are trying to establish positions in the market while the likes of Anthropic and OpenAI try to take a piece of the pie in their own right. However, powerful tools being available to consumers at low or even no cost could soon be a thing of the past, as providers start to seek better ways to monetize their technologies. In the meantime, expect tech giants and upstarts alike to burn through huge piles of cash as they continue to develop and scale their offerings.
Elon Musk's Grok chatbot for X (formerly Twitter) raises questions about bias and misinformation
State of Play: Elon Musk's AI start-up, xAI, has recently launched its first AI model, Grok. The new chatbot is integrated into X and is currently being tested by a small group of users through the Premium+ service. Once the beta testing phase is complete, Grok will be made available to all X Premium subscribers. According to Musk, Grok has been designed to be "anti-woke," which could lead to less filtering of sensitive subjects.
Why it matters: Grok is part of X's bid to boost engagement and revenue, but some experts have raised concerns about potential biases and the spread of false or harmful information through the chatbot. It can also be seen as another step toward Musk's vision of transforming X into an "everything app".