
Seeing Isn't Believing: How Deepfakes Are Rewiring Organizational Risk

At the State of the Union address last week, President Trump shared the disturbing story of Elliston Berry, who, at 14 years old, was victimized by an explicit deepfake video that circulated on social media. Elliston’s story was one of many that inspired the TAKE IT DOWN Act, which was unanimously passed by the Senate in February. President Trump is not the only world leader to sound the alarm about deepfakes – synthetic media so convincing they can fool even the most discerning eye. President Emmanuel Macron kicked off France’s Artificial Intelligence Action Summit with a series of AI-generated videos of himself clubbing, singing, rapping and showing off long, luscious hair. From heads of state to our neighbors and friends, this technology has the potential to impact anyone at any time, with serious implications.

Research by the Institute for Strategic Dialogue (ISD) found that people could identify AI-generated content correctly 52 percent of the time, highlighting the challenge of distinguishing deepfakes from authentic media. With reduced moderation at both X and Meta/Facebook, the risk of deepfakes proliferating unchecked is higher than ever.

Beyond mere disinformation, these AI-powered forgeries have opened a new frontier for cybercrime, enabling highly targeted phishing schemes and automated social engineering attacks that could pierce even robust security defenses. The financial and reputational consequences for businesses and institutions are profound, as the line between authentic and artificial continues to blur.

The New Face (and Voice) of Financial Fraud

The financial industry stands at the forefront of this crisis, with AI-powered financial fraud surging 700 percent in 2023. Projected losses could reach $40 billion by 2027, challenging the sector’s established fraud defenses.

The democratization of artificial intelligence has empowered malicious actors to create sophisticated deepfakes that can breach financial controls.

HSBC Hong Kong endured the largest verified deepfake fraud to date, losing HK$273 million (US$35 million) in January 2023 after criminals used video conferencing to impersonate executives. The Reserve Bank of India reported similar incidents at three banks between September and December 2023.

Beyond direct fraud, a more insidious threat looms: market manipulation through strategic deepfake deployment. Bad actors could leverage AI-generated content, from fake executive statements to fabricated earnings calls, to create artificial market movements. The delay between a deepfake's release and its debunking creates a window for illicit arbitrage, a risk compounded by automated trading systems that may react before human verification occurs, threatening market stability.

The threat of deepfake fraud extends beyond financial institutions to corporations across all sectors. In mid-2023, Ferrari NV narrowly avoided a sophisticated scheme when an executive received messages appearing to be from CEO Benedetto Vigna about a supposed acquisition. The messages came from an unfamiliar number but featured the CEO’s picture and utilized deepfake voice technology that perfectly matched Vigna’s Italian accent. Only a slight mechanical intonation made the executive suspicious, leading him to verify the caller’s identity by asking about a book Vigna had recently recommended.

Ferrari NV is no outlier. Binance faced multiple deepfake videos of CEO Changpeng Zhao circulating on social media throughout 2023, though no financial losses were confirmed.

Beyond fraud, organizations’ digital visibility has inadvertently created new attack vectors. Consider Pfizer CEO Albert Bourla’s experience at the World Economic Forum, where his call for expanded medicine access was maliciously edited to fuel conspiracy theories and shared on social media.

Educational Institutions Under Siege

As Elliston’s story demonstrates, educational institutions face a particularly insidious threat as deepfake technology enables new forms of harassment and abuse targeting both students and staff. In 2019, nonconsensual AI-generated pornography was estimated to account for 96 percent of all altered online video content, and the total volume of deepfake videos has since increased 550 percent.

Schools are particularly vulnerable due to their combination of digital-native students, extensive online presence and responsibility to protect minors. Recent incidents at educational institutions across the country have exposed significant gaps in existing policies and procedures. Many schools lack clear protocols for identifying and responding to AI-generated content, and existing anti-bullying policies often don’t adequately address the unique challenges of deepfake harassment. The speed at which deepfake content can spread through student networks, combined with its potential for severe psychological harm, creates an urgent need for comprehensive institutional responses.

The contrasting responses of two schools illuminate the gravity of this challenge. When Westfield High School in New Jersey discovered students creating sexually explicit deepfakes of classmates, the school issued a two-day suspension and minimized the incident in communications. Beverly Vista Middle School in California, conversely, treated similar behavior as a serious offense, involving law enforcement and expelling the perpetrators.

While the perpetrator of Elliston’s deepfake video was disciplined, some have pushed for more severe consequences and faster action. After several months, Elliston’s family needed to appeal to Senator Ted Cruz to intervene and demand that social media companies remove the traumatizing video.

Legal frameworks are still evolving to address these challenges. While some states have enacted legislation specifically addressing deepfake harassment in educational settings, many institutions operate in jurisdictions with no clear guidance. This regulatory patchwork leaves schools struggling to balance student privacy, free speech considerations and their duty of care.

Risks are also elevated for administrators and teachers. The same vulnerabilities that expose corporate leaders to digital manipulation are being exploited to target educators. In one chilling recent example, a Maryland high school principal was put on administrative leave and received death threats after AI-generated racist recordings mimicked his voice, demonstrating how deepfakes can transform leadership positions into liabilities.

Building Organizational Resilience: The Five Pillars of Deepfake Defense

The threat posed by deepfakes demands immediate and comprehensive organizational response. Traditional security measures—from voice authentication to video verification—can no longer be trusted implicitly.

Organizations should implement five critical steps to build effective deepfake resilience:

First, preparation demands the same rigor as traditional crisis management. Organizations should add a deepfake scenario to crisis response plans and conduct regular tabletop exercises to identify any gaps and build institutional muscle memory. This foundation of preparedness can mean the difference between containing an incident and watching it spiral into a crisis.

Second, implement a robust monitoring program that includes both dark web channels and legitimate platforms. Organizations must establish systematic tracking of potential threats and references to fake content, creating early warning systems that can flag suspicious activity before it gains traction.
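As a purely illustrative sketch, and not a description of any particular monitoring product, the snippet below shows the core of such an early-warning system: posts collected from monitored channels are scanned against a watchlist of executive names and sensitive terms, and matches are flagged for analyst review. The watchlist, sources and sample posts are hypothetical assumptions for the example.

```python
# Illustrative early-warning sketch: scan monitored posts for watchlist terms.
# The watchlist, sources and sample posts below are hypothetical.
from dataclasses import dataclass
from datetime import datetime

# Terms the organization wants flagged, e.g. executive names or deal language.
WATCHLIST = {"benedetto vigna", "earnings call", "acquisition announcement"}


@dataclass
class Post:
    source: str       # e.g. a social platform feed or dark web forum scrape
    text: str
    seen_at: datetime


def flag_suspicious(posts: list[Post]) -> list[Post]:
    """Return posts mentioning any watchlist term, queued for analyst review."""
    flagged = []
    for post in posts:
        lowered = post.text.lower()
        if any(term in lowered for term in WATCHLIST):
            flagged.append(post)
    return flagged


if __name__ == "__main__":
    sample = [
        Post("forum", "Leaked clip: CEO discusses surprise acquisition announcement", datetime.now()),
        Post("social", "Unrelated chatter about the weather", datetime.now()),
    ]
    for hit in flag_suspicious(sample):
        print(f"[ALERT] {hit.source}: {hit.text}")
```

In practice, a simple keyword layer like this would feed a human review queue rather than trigger automated responses, since false positives are common and context matters.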

Third, detection capabilities must be sharp and swift. Just as IT departments run phishing tests, organizations should now test their resilience against deepfake attacks. Training programs must equip employees to recognize increasingly sophisticated forgeries, while technical systems must be calibrated to identify artificial content quickly.

Fourth, new protection measures should create barriers against manipulation. Organizations must implement robust authentication measures, including digital signatures and watermarking for official audio and video content. These technical safeguards form a critical defense against unauthorized manipulation or misuse of institutional communications.
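As one hedged illustration of the digital-signature component (the key handling, file contents and helper names here are assumptions for the sketch, not a prescribed toolchain), an organization could publish a detached signature alongside each official recording so that recipients can confirm a clip has not been altered. The example uses the open-source Python cryptography library and an Ed25519 key pair.

```python
# Minimal sketch: detached signatures for official media, using the
# third-party "cryptography" package (pip install cryptography).
# Key storage, distribution and watermarking are out of scope here and
# would be handled by the organization's own key-management systems.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Produce a detached signature to publish alongside an official recording."""
    return private_key.sign(media_bytes)


def is_authentic(public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the clip matches the organization's published signature."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    official_clip = b"...raw bytes of the official earnings-call video..."
    signature = sign_media(key, official_clip)

    tampered_clip = official_clip + b" spliced content"
    print(is_authentic(key.public_key(), official_clip, signature))   # True
    print(is_authentic(key.public_key(), tampered_clip, signature))   # False
```

A detached signature leaves the media file itself untouched, while watermarking embeds provenance information directly in the content; many organizations layer both approaches.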

Lastly, when a deepfake incident occurs, the response must be both agile and strategically calibrated. Just as in any crisis, organizations must quickly assess whether to engage with or ignore specific threats, considering market impact and stakeholder trust. Strategic advisors can prove crucial in these moments, ensuring responses are precisely calibrated and helping to rebuild trust when necessary.

Some of the same organizations that were caught on the back foot have internalized these elements in their go-forward strategies, significantly improving their ability to quickly counter false narratives and maintain stakeholder trust. HSBC implemented dual authentication protocols for large transfers, while Binance launched blockchain-based video verification and established an internal deepfake detection team. The Hong Kong Monetary Authority emerged as the first regulator to require comprehensive deepfake defense measures in February 2023, mandating new verification protocols and staff training.

In an age where what you see—and hear—can no longer be believed, organizations must fundamentally reimagine trust and verification. The threat transcends individual incidents of fraud, striking at the heart of institutional credibility.  Those who invest in robust detection protocols, crisis response procedures and stakeholder communication strategies today will be better positioned to maintain trust tomorrow. The alternative – waiting until a deepfake crisis strikes – is a risk not worth taking.


To learn more about how FGS Global can help, contact cybertaskforce@fgsglobal.com.


About the Authors

Kelly Langmesser is a Managing Director at FGS Global based in New York City who specializes in cybersecurity preparedness and incident response. Prior to joining the firm, Kelly spent eight years as a spokeswoman for the FBI where she handled communications strategy and media matters for major cyber, criminal and terrorism investigations.

Joshua Gross is a partner at FGS Global based in Washington, DC, where he develops integrated communications campaigns for governments, corporations and international NGOs. Prior to joining FGS Global, Josh was the Director of Media Relations at the Embassy of Afghanistan in Washington, DC.

Peter Block is a partner at FGS Longview based in Toronto, specializing in crisis communication with a focus on cyber incidents and critical issues. Before joining FGS Longview, he led corporate communications for Maple Leaf Foods.