Generative AI: A New Frontier for Cyber Risk?

In the latest FGS Global webinar, Jenny Davey, Partner and co-lead of the UK crisis communications team, sits down with a panel of cybersecurity experts to discuss the challenges corporates face as generative AI transforms cyber risk, and the potential countermeasures.

Increasing attention is being paid to the cybersecurity risks that generative AI poses to companies, requiring business leaders to evolve constantly with the changing landscape so that they can prepare, recognise and respond accordingly. To shed light on these issues, FGS Global’s co-lead of crisis communications Jenny Davey spoke with a panel of three leading experts on the topic: Jamie Smith, head of cyber security at S-RM; Nina Lazic, a Partner and cyber security specialist at Osborne Clarke; and Charles O’Brien, Partner and co-lead of FGS Global’s crisis practice.

Jamie Smith led the discussion by highlighting how generative AI has the potential to turbocharge the threat environment, giving malicious actors access to more sophisticated and novel tools. Firstly, attackers are able to mimic human behaviours and craft spear phishing emails at massive scale. Using tools based on generative AI, criminals can now automatically generate customised, context-specific phishing messages tailored to their targets, removing the need for time-intensive manual crafting. ‘Man in the middle’ attacks, where the attacker sits on a system or in an inbox for months waiting for the best moment to strike, have become even more of a threat.

Secondly, while Smith highlights that the fundamental nature of attacks is not changing, he cautions that generative AI could democratise the ability to write malicious code. Whereas creating malware was previously the preserve of skilled coders, AI text generation today allows anyone with a keyboard to generate new malware without the need for coding experience. Smith pointed to the recent emergence of "WormGPT", a malign clone of ChatGPT advertised on dark web forums and designed specifically to write phishing emails and code malicious programs.

Addressing the regulatory backdrop for corporates, Nina Lazic points out that while companies are expected to put in place preventative measures to safeguard personal data, these measures must adjust according to the evolving threats posed by cybercrime. With the bar for addressing cyber risks already high and only set to rise to reflect emerging threats, Lazic argues that companies must remain agile and adapt their protective measures to ensure that personal data remains safeguarded.

Both Smith and Lazic point to the importance of employee education as one of these necessary preventative measures. Even the most secure technological defences can be compromised in an instant by human error: Verizon confirmed this in its annual data breach investigations report last year, which found that as many as 82% of data breaches involved a human element in some way. To account for this, the panellists suggest that companies need not only to educate their employees about the emerging risks but to go so far as to instil a culture of hypervigilance.

In the context of communications, FGS’s Charles O’Brien translates these arguments into actionable advice for corporates, recommending that they roll out cyber communications playbooks covering how a company treats its proprietary data, where it is held and how it is managed, so that the company can respond effectively in a time of crisis. This all has to be underscored by doing the basics, those ‘hard yards’, really well and doubling down on the traditional aspects of preparation and education. Practice is the key mantra: there is no substitute for regular drilling in the form of war-gaming simulations for senior leadership and the key members of the crisis response committee.

The growing cyber-threats posed by generative AI clearly require companies to prepare, to recognise and to respond with increasing precision. As stressed by these leading voices on the subject, failing to do so risks irreparable consequences.