
Protecting Corporate Reputation in the Era of the Deepfake


Cyber risk is now chronic. As the threat environment for corporates continues to evolve at a searing pace, managing this issue has never been more critical for leadership teams. AI has the potential to turbocharge this dynamic, with 2023 witnessing a 72% increase in global data breaches on the prior year.

The emergence of highly sophisticated deepfake attacks exemplifies the future risks posed by AI, gaining attention across political, business and societal spheres. Deepfakes are now everywhere, having been cast into the spotlight by the recent targeting of high-profile figures including Joe Biden, Keir Starmer, Taylor Swift and Martin Lewis.

The corporate sector is particularly vulnerable: misrepresentation can damage trust, business continuity, and market stability. One striking example is the $25 million lost by a Hong Kong company to a deepfake scam, highlighting the need for robust communication strategies and internal safeguards.

Such incidents demonstrate how potent deepfake technology can be in highly advanced attacks. A person's likeness can now be recreated with limited access to videos of the target, and their voice cloned from only seconds of audio – making this activity hard to detect. Masks built from social media photos can defeat systems protected by face ID, and this is just the beginning.

The rapid advancement of such technology raises concerns that it will outpace legal frameworks, leaving companies and political entities struggling to keep up. Although regulators recognise the threat, tangible countermeasures are lacking.

In the UK, the absence of robust online disinformation laws has drawn criticism, and while the Online Safety Act addresses harmful online content, it falls short on combating disinformation. Emily Cox, Partner at Pinsent Masons and data privacy expert, underlines that in this market, “the government [has] confirmed that we should not expect additional AI regulation in the short term as it is adopting a light touch and pro-innovation approach for now.”

Cox points out that in its response to the consultation on the Online Safety Act, the government “acknowledges that regulation will be required in the future but believes it is too early to anticipate issues and apportion liability given the fast-evolving landscape.” For the time being, UK corporates have to rely on what Cox terms a “patchwork of legislation” which may or may not assist depending on the specifics of the deepfake in question.

Social media platforms, pivotal to the spread of harmful deepfakes, have also been criticised for their slow response. Ripple CEO, Brad Garlinghouse, publicly called out YouTube for its reaction to a deepfake scam using his likeness. Meta and X have received similar criticism, despite the companies’ claims they are working hard to address concerns.

Highlighting this lack of clarity around the responsibility for handling deepfakes, Wasim Khaled, CEO at Blackbird.AI, observes: “A collective and proactive approach is needed. Boards must make reputational risk from disinformation a strategic priority. Proactive preparation and education around deep fakes and narrative attacks are key.” Khaled adds that against this backdrop, “contemporary cyber readiness must include planning for deepfakes and narrative attack campaigns”.

Cyber risk, as exemplified by deepfake attacks, is a dynamic problem requiring a dynamic solution. How can businesses better prepare to combat this threat?

1. Deepfake defence training: Empower employees to spot fakes

The advent of hybrid working and a lack of organisational cyber awareness continue to hamper corporates' efforts to tackle these unique challenges. Internal awareness and education remain the first line of defence. Organisations whose leadership drives a mature conversation about digital risk will be better protected than those that simply say "let IT handle it". Employees should be trained to recognise deepfake content, better positioning the company to prevent innocent mistakes from causing significant business disruption.

2. Rapid response is key: Tackle deepfakes with speed

To prevent a deepfake from escalating, companies should respond quickly and proactively. As Cox says, “the first port of call for an affected company or individual should be to contact the platform on which the deepfake is hosted, and report both the content and the accounts sharing the content via the reporting tool.” Once they have identified where the deepfake has been published, companies should contact the social media platforms promptly to ensure swift take-down, issuing alerts from verified channels to address any confusion over the authenticity of the audio or video clip.

3. Digital detectives: Partner with forensic experts

Companies should engage digital forensics experts who specialise in deepfake detection and investigation as soon as possible. These experts have access to detection tools and software that may otherwise be unavailable to the affected company, and may be able to trace the source of the deepfake, allowing the company to identify the threat actor directly. Wasim Khaled highlights that “with AI making it increasingly easy to fabricate believable fake content, companies need reliable partners to help them identify them as they scale and become harmful, to facilitate more informed and effective strategic decision making in a time of crisis.”

4. Crisis blueprint: Craft a communications response plan

Most importantly, the key principles of preparation and planning still apply. An integrated, coherent incident response plan that knits together all the elements and respective teams is essential, along with identified lines of internal and external communication for a deepfake crisis. With such a plan in place, corporates will be better prepared and, in the event of an incident, better able to maintain trust with stakeholders. As Martin Lewis – the renowned personal finance commentator – demonstrated, the old methods can still be key to managing these moments in the new information age: ultimately, his own social media feeds and the traditional media were the most trusted platforms for getting his message out.
