
Defending Against Generative AI Cyber Threats

Generative AI has been getting a lot of attention lately. ChatGPT, DALL-E, VALL-E, and other generative AI models have taken the ease of use and accuracy of artificial intelligence to a new level and unleashed it on the general public. While the technology has myriad potential benefits and benign uses, it also raises serious concerns—including that it can be used to develop malicious exploits and more effective cyberattacks. The real question, though, is: what does that mean for cybersecurity, and how can you defend against generative AI cyberattacks?


Nefarious Uses for Generative AI

Generative AI tools have the potential to change the way cyber threats are developed and executed. With the ability to generate human-like text and speech, these models can be used to automate the creation of phishing emails, social engineering attacks, and other types of malicious content.


Phrase the request cleverly enough, and you can also get generative AI like ChatGPT to write exploits and malicious code outright. Threat actors can also automate the development of new attack methods. For example, a generative AI model trained on a dataset of known vulnerabilities could be used to automatically generate new exploit code targeting those vulnerabilities. However, this is not a new concept; techniques such as fuzzing have long been used to automate parts of exploit development.
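To make the comparison concrete, here is a minimal sketch of the mutation-fuzzing idea. The parseRecord target and the seed bytes are hypothetical stand-ins; real fuzzers such as AFL or libFuzzer add coverage feedback and far smarter mutation strategies.

```typescript
// Minimal mutation-based fuzzing sketch (Node.js; illustrative only).

// Hypothetical parser standing in for the real target under test.
function parseRecord(input: Buffer): void {
  if (input.length > 3 && input[0] === 0xff && input[1] === 0xfe) {
    throw new Error("unhandled header combination");
  }
}

// Flip one random byte in a copy of the seed input.
function mutate(seed: Buffer): Buffer {
  const out = Buffer.from(seed);
  const i = Math.floor(Math.random() * out.length);
  out[i] = Math.floor(Math.random() * 256);
  return out;
}

const seed = Buffer.from([0x00, 0x01, 0x02, 0x03, 0x04]);
const crashes: Buffer[] = [];

for (let run = 0; run < 100_000; run++) {
  const candidate = mutate(seed);
  try {
    parseRecord(candidate);
  } catch {
    crashes.push(candidate); // record inputs that crash the target
  }
}

console.log(`found ${crashes.length} crashing inputs`);
```

Inputs that crash the parser point to exactly the kinds of bugs an exploit developer, human or otherwise, would investigate further.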


One potential impact of generative AI on cybersecurity is the ability for threat actors to quickly develop more sophisticated and convincing attacks. For example, a generative AI model trained on a large dataset of phishing emails could be used to automatically generate new, highly convincing phishing emails that are more difficult to detect. Generative AI models can also create realistic-sounding speech for phone-based social engineering attacks: VALL-E can match someone’s voice and mannerisms almost perfectly from just a three-second recording of their voice.


Matt Duench, Senior Director of Product Marketing at Okta, stressed, “AI has proven to be very capable of rendering human-like copy and live dialog via chat. In the past, phishing campaigns have been thwarted by looking for poor grammar, spelling, or general anomalies you wouldn’t expect to see from a native speaker. As AI enables advanced phishing emails and chatbots to exist with a higher degree of realism, it’s even more important that we embrace phishing-resistant factors, like passkeys.”
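For context, passkeys are built on the browser’s WebAuthn API, which binds a credential to the website’s origin so it cannot be replayed against a lookalike phishing domain. A minimal registration sketch follows; in a real deployment the challenge and user details come from the server, and the names below are hypothetical placeholders.

```typescript
// Browser-side passkey registration via the WebAuthn API (sketch).
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // The challenge must be a server-generated, single-use value;
    // random bytes are used here purely as a placeholder.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Corp", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"), // hypothetical user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    // COSE algorithm identifiers: -7 = ES256, -257 = RS256.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: {
      residentKey: "required",       // a discoverable credential, i.e. a passkey
      userVerification: "preferred", // biometric or PIN where available
    },
  };

  // The browser scopes the resulting credential to example.com, which is
  // what makes it resistant to phishing pages on other domains.
  return navigator.credentials.create({ publicKey });
}
```

Because the private key never leaves the authenticator and the credential is bound to the origin, there is nothing for even the most convincing AI-written phishing email to steal.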


For what it’s worth, I should stress that generative AI models are not inherently malicious and can be used for beneficial purposes as well. For example, generative AI models can be used to automatically generate new security controls or to identify and prioritise vulnerabilities for remediation.
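As a sketch of that benign side, the snippet below asks a hosted model to rank scanner findings for remediation. It uses the official openai npm client, but the model name, prompt, and workflow are illustrative assumptions, and any such ranking would still need human review.

```typescript
// Sketch: using a hosted LLM to help prioritise vulnerability findings.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function prioritise(findings: string[]): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model choice
    messages: [
      {
        role: "system",
        content:
          "You are a security analyst. Rank these findings by remediation " +
          "priority and briefly justify each ranking.",
      },
      { role: "user", content: findings.join("\n") },
    ],
  });
  return response.choices[0].message.content ?? "";
}

// Example usage with toy scanner output.
prioritise([
  "CVE-2021-44228 on an internet-facing web server (Log4j)",
  "Outdated TLS configuration on an internal admin panel",
  "Missing rate limiting on the login endpoint",
]).then(console.log);
```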


However, Duench urges caution when relying on code created with generative AI. “Generative AI systems are trained by looking at existing examples of code. Trusting that the AI will generate code to the specification of the request does not mean the code has been generated to incorporate the best libraries, considered supply chain risks, or has access to all of the closed-source tools used to scan for vulnerabilities. They can often lack the cybersecurity context of how that code functions within a company’s internal environment and source code.”


Detecting Generative AI Cyberattacks

You can’t. At least, not easily or accurately.


It’s important to note that there is no viable way to accurately tell whether an attack was developed by generative AI or not. The whole point of a generative AI model is to produce output indistinguishable from what a human would create.


“Generative AI projects like ChatGPT and other advancements in image creation, voice mimicry, and video alteration create a unique challenge from a cybersecurity perspective,” explained Rob Bathurst, Co-Founder and Chief Technology Officer of Epiphany Systems. “But in the hands of an attacker they're essentially being used to target the same thing—a person through social engineering.”


The good news is, you don’t have to. It’s irrelevant whether an attack was developed using generative AI or not. An exploit is an exploit, and an attack is an attack, regardless of how it was created.


“Sophisticated Nation-State Adversaries”

Trying to figure out whether an exploit or cyberattack was created by generative AI is like trying to determine whether it originated from a nation-state adversary. Identifying the specific threat actor, their motives, and their ultimate objectives may be important for improving defences against future attacks, but it is not an excuse for failing to stop an attack in the first place.


Many organisations like to deflect blame by claiming that breaches and attacks were the result of “sophisticated nation-state adversaries,” using this as justification for their failure to prevent the attack. However, the job of cybersecurity is to prevent and respond to attacks regardless of where they come from.


Security teams can’t simply shrug their shoulders and concede defeat just because an attack might come from a nation-state adversary or generative AI rather than a run-of-the-mill human cybercriminal.


Effective Exposure Management

Generative AI is very cool and has significant implications—both good and bad—for cybersecurity. It lowers the barrier to entry by enabling people with no coding skills or knowledge of exploits to develop cyberattacks, and it can be used to automate and accelerate the creation of malicious content.


Bathurst noted, “While there are concerns about its ability to generate malicious code, there are many tools out there already that can assist someone in natural language-based code generation like GitHub Copilot. When we remember that this is a change in technique and not a change in the vector, we can essentially revert back to the fundamentals of how we have always limited exposure to social engineering or business email compromise. The key to being resilient now and in the future is about recognising that people are not the weak link in a business, they are its power. Our job in cybersecurity is to surround them with fail-safes to protect both them and the business by limiting unnecessary risk before a compromise."


In other words, neither how a threat was developed nor a spike in the volume of threats fundamentally changes anything if you are doing cybersecurity the right way. The same principles of effective cyber defence, such as continuous threat exposure management (CTEM), still apply. By proactively identifying and mitigating the attack paths that can lead to material impact, organisations can protect themselves from cyber threats regardless of whether those threats were developed using generative AI.
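As a toy illustration of that attack-path idea, the sketch below runs a breadth-first search over a hypothetical asset graph to enumerate routes from an internet-facing foothold to a critical asset. Real exposure-management tooling builds these graphs from inventory, identity, and vulnerability data rather than a hand-written map.

```typescript
// Toy attack-path enumeration over a hypothetical asset graph.
type Graph = Record<string, string[]>;

// Invented reachability data, for illustration only.
const reachability: Graph = {
  "internet": ["web-server"],
  "web-server": ["app-server"],
  "app-server": ["database", "file-share"],
  "file-share": ["domain-controller"],
  "database": [],
  "domain-controller": [],
};

// Breadth-first search collecting every acyclic path to the target.
function findPaths(graph: Graph, start: string, target: string): string[][] {
  const paths: string[][] = [];
  const queue: string[][] = [[start]];

  while (queue.length > 0) {
    const path = queue.shift()!;
    const node = path[path.length - 1];
    if (node === target) {
      paths.push(path);
      continue;
    }
    for (const next of graph[node] ?? []) {
      if (!path.includes(next)) queue.push([...path, next]);
    }
  }
  return paths;
}

// Every path found is a candidate exposure to break, regardless of
// whether the eventual exploit is human- or AI-generated.
console.log(findPaths(reachability, "internet", "domain-controller"));
```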

The capabilities of generative AI and the precision of its output are impressive, and the technology will continue to advance and improve. Don’t get me wrong: it certainly has the potential to change the way cyber threats are developed and executed. But effective cybersecurity does not change based on the source or motive of an attack.


