AI in Cybersecurity: Should we be excited?

Sanjeev Singh
12 min read · Apr 16, 2024


Banner image: AI robots manning a security operations centre (image generated using AI)

I venture into this article with a disclaimer. The thoughts here are personal and based on my understanding of the situation as it exists at the time of writing this article. Artificial Intelligence (AI) is a rapidly evolving field and many of the aspects discussed here may no longer be relevant in the future.

It’s that time again. Since the latter half of 2023, the cybersecurity ecosystem has rediscovered its newest theme: Artificial Intelligence. What was old is new again.

The last few years belonged to XDR (Extended Detection and Response), and the world was still trying to understand what that actually meant. Every endpoint security vendor had something to offer, and the definition often varied based on who you asked. I wrote about it back in 2021 here. Interestingly, at that time I was writing a series named Modern Cyber Defense and had covered seven parts. The next in my plan was AI in cybersecurity, but for some reason I held back. Around that time I read this blog on AI and cybersecurity from Eugene Kaspersky and broadly agreed with it. It was very different from the prevalent narrative, where AI was expected to solve every problem in cybersecurity.

The last few years have seen revolutionary changes with Generative AI (Gen AI), and I think I have mustered enough courage now to explore the topic.

Background

Relationship between AI, ML, DL, NLP and LLM

AI is not a new term when it comes to cybersecurity. It has been part of cyber defense for quite a while. As can be seen in the above image, AI is a broad concept that encompasses many subfields, including machine learning (ML), deep learning (DL), and natural language processing (NLP). These fields often overlap and complement each other.

ML is a subset of AI that uses algorithms to build predictive models. It has been the most widely used AI discipline in cybersecurity, from the early days of User and Entity Behavior Analytics (UEBA) around 2015 to SIEM, EDR and XDR. The technology has fairly stabilized, giving us detections based on behaviors and anomalies at scale. Almost every modern cyber defense tool leverages ML in some form or the other.
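To make this concrete, here is a minimal, illustrative sketch of the kind of unsupervised anomaly detection that underpins UEBA-style tooling, assuming a small set of hypothetical features extracted from logon telemetry. Real products engineer far richer features and models; this is only a sketch of the idea.

```python
# Minimal, illustrative sketch of UEBA-style anomaly detection.
# Feature names and data are hypothetical; real products use far richer features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logon_hour, failed_logons_last_24h, distinct_hosts_accessed, mb_uploaded]
baseline_events = np.array([
    [9, 0, 2, 5], [10, 1, 3, 8], [14, 0, 2, 3], [11, 0, 4, 10],
    [16, 2, 3, 6], [9, 0, 2, 4], [13, 1, 3, 7], [15, 0, 2, 5],
])

# Train on "normal" historical behavior for a user or entity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_events)

# Score new events: the second one (3 AM logon, many failures, large upload) stands out.
new_events = np.array([
    [10, 0, 3, 6],      # similar to the baseline
    [3, 12, 25, 900],   # anomalous behavior
])
for event, label in zip(new_events, model.predict(new_events)):
    verdict = "ANOMALY - raise alert" if label == -1 else "normal"
    print(event, "->", verdict)
```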

Into that milieu was unleashed ChatGPT in late 2022, and since then, and especially since mid-2023, there has been endless noise about how AI is going to revolutionize cybersecurity. This is almost déjà vu, like going back a few years and hearing similar stories in the early days of UEBA and XDR. Almost every cybersecurity product vendor offers, or has plans to offer, a Generative AI based layer on top of their existing product. There is a lot of FUD about how AI, especially in the hands of malicious actors, brings new risks and how we need AI to fight back. In this article, we will focus on this aspect.

Does AI pose additional risks for cyber defenders?

Short answer: absolutely. Although, in my view, the right question to ask is whether AI poses any new risks for cyber defenders.

A great resource to understand AI risks is the MITRE ATLAS framework at https://atlas.mitre.org/matrices/ATLAS.

The MITRE ATLAS framework for AI risks, as on 15 Apr 2024

As can be seen in the picture above, almost all of the tactics, except ML Attack Staging, are common with the ATT&CK framework. Even if we look at the techniques, the majority relate to attacks on the LLMs themselves and would thus be most relevant to developers of LLMs. What about consumers of LLMs? Do they face the same risks as developers? Can they do something about it? And what about the industry noise about the threat from bad actors leveraging LLMs? Does any of this change, in an appreciable way, the current foundations of cyber defense?

In my view, the majority of the risks related to LLMs already existed pre-LLMs, but the scale and efficiency available to attackers have gone up. Here are some of the risks as I see them:

  • Attackers becoming more efficient through use of LLMs.
  • Inadvertent data leakage by consumers in public LLMs through queries and data uploads.
  • For developers of LLMs, inadequate security and privacy of LLM tools.
  • Other aspects such as ethics, bias, intellectual property, etc., which are beyond the scope of this article.
  • LLMs themselves expand the attack surface.

Let us examine these risks in some more detail.

Use of LLMs by Bad Actors

This space has the highest FUD in the industry, with vendors pitching their AI-enabled solutions to fight bad actors armed with AI.

Looking at the latest Gartner Hype Cycle for AI, we see that Generative AI sits at the Peak of Inflated Expectations, and the only way forward from there is the Trough of Disillusionment.

Source: https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle

Moreover, the latest IBM X-Force Threat Intelligence Index 2024, available at https://securityintelligence.com/x-force/2024-x-force-threat-intelligence-index/, has this to say on the subject:

Despite looming gen AI-enabled threats, X-Force hasn’t observed any concrete evidence of generative AI-engineered cyberattacks to date or a rapid shift in attackers’ goals and objectives from previous years.

Attackers always want the easy way out. What is the point of doing the difficult thing when defenders still haven’t got the basics right and there are still easy opportunities for attackers?

As per the X-Force report mentioned above, the primary attack vectors continue to be credential theft, phishing and exploiting public-facing applications. This theme is also seen in other reports such as the Verizon DBIR or the Mandiant M-Trends report. I fear this will continue for some more time.

Do attackers leveraging LLMs have an advantage over those not using them? Possibly. They can potentially craft better phishing mails with higher click rates, run better social engineering, or churn out malicious code faster. As defenders, do we need to change our tactics? Not really, since even without LLMs, even with lower click rates, the risk of compromise exists. Even a single compromised account can lead to a larger incident through lateral movement and privilege escalation, and we have seen many real-world examples of this in the last few years. It is important to maintain a high maturity of existing controls and focus on post-exploitation controls that can prevent and/or detect tactics like lateral movement, persistence, privilege escalation, command and control, etc.

At present, getting the basics right and ensuring a high maturity of existing controls should suffice. We should worry about esoteric attacks once the foundation has been made solid.

This may very well change in the future once AI becomes more capable and creates new attack vectors, but as long as the attack vectors remain the same, the defensive controls can also remain the same, although they will have to become more agile and faster.

Use of Public LLMs by Users

Enterprise data loss has always been a risk, and the advent of a large number of public LLMs has created a new avenue for enterprise users to expose sensitive corporate data. While security folks understand the risks, the average user may not.

Users pasting sensitive data into chatbots, or uploading documents containing sensitive data for analysis or insights, pose a real threat.

Enterprises may already have DLP, CASB, SSE or proxy solutions, which can be used to control user behavior. However, Gen AI is still relatively new, and many solutions may not yet have a URL category for Gen AI, in the absence of which it becomes nearly impossible to keep track of the hundreds of public LLMs already available and the many new ones mushrooming every week. So, check with your solution provider whether they already support Gen AI as a category in their tool and, if not, when they are likely to support it.
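As a stopgap until the vendor ships a native Gen AI category, some teams maintain their own list of known public LLM domains and feed it into whatever custom category or blocklist mechanism their proxy/SSE supports. The helper below is a hypothetical sketch of that matching logic; the domains listed are examples, not a complete inventory.

```python
# Hypothetical stopgap: match outbound requests against a locally maintained
# list of public Gen AI domains until the proxy/SSE vendor ships a native category.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "poe.com",
}

def is_genai_destination(host: str) -> bool:
    """Return True if the host or any parent domain is on the Gen AI list."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check the host itself and each parent domain (e.g. api.chatgpt.com -> chatgpt.com).
    return any(".".join(parts[i:]) in GENAI_DOMAINS for i in range(len(parts)))

print(is_genai_destination("api.chatgpt.com"))        # True -> apply Gen AI policy
print(is_genai_destination("intranet.corp.example"))  # False -> normal handling
```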

Even when the tools support it, we have to recognize that Gen AI has its own benefits: it does solve some user problems and helps with efficiency improvements. So the right questions to ask are:

  • Do we want to block all access to LLMs? This is perhaps the easiest control to implement by blocking the Gen AI category of URLs in SSE/Proxy or equivalent tools as long as they support this category. At the same time, we have to be cognizant of the fact that we may be inhibiting company growth. Whether we like it or not, Gen AI is an enabler and business will definitely want to leverage it for improved efficiencies. If it is a regulated environment that prohibits such use, then it is an easier decision. Else, we should be flexible in our approach.
  • If not block, then what? Take a risk-based approach. Maybe users typing their prompts and getting answers is okay, but pasting data/scripts or uploading documents is not. Maybe we can allow use of public LLMs but block any paste or upload operations using SSE/DLP or something equivalent (a minimal sketch of such a gate follows this list). This can be taken further: maybe we allow pasting or uploading data too, but block it for sensitive data. This requires that the organization has mature data protection controls and the ability to identify sensitive data.
  • Can we provide an authorized LLM for the enterprise? While most LLMs are public and free to use, paid and enterprise versions have started to emerge from providers such as OpenAI, Google and Microsoft. Microsoft Copilot (formerly Bing Chat Enterprise), available at no additional cost with almost all M365 and O365 SKUs, is one such example. It uses the well-known GPT-4 and DALL-E 3 models. When signed in with a work or school account, it provides commercial data protection. Commercial data protection means both user and organizational data are protected: prompts and responses aren’t saved, Microsoft has no eyes-on access, and chat data isn’t used to train the underlying LLMs. This can be a good option for enterprises on M365 or O365, with similar offerings available for those on a non-Microsoft stack.
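To make the risk-based option above more concrete, here is a minimal, hypothetical sketch of a DLP-style gate that inspects outbound prompt text before it is allowed through to a public LLM. The regex patterns and policy are illustrative assumptions; real DLP/SSE engines use far more sophisticated classification.

```python
# Illustrative DLP-style gate for outbound Gen AI prompts.
# The patterns are simplistic examples; real data classification is far richer.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def allow_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, list of matched sensitive-data categories)."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (len(hits) == 0, hits)

ok, reasons = allow_prompt("Summarise this INTERNAL ONLY incident report ...")
print("allowed" if ok else f"blocked: {reasons}")
```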

The last option, an authorized enterprise LLM, can be a win-win strategy, giving users something that enhances their productivity and new, exciting technology to play with, whilst blocking every other LLM. We still have to train users about the risks of hallucinations, errors, intellectual property exposure and uploading sensitive data.

Inadequate Security and Privacy Controls for Self-Developed LLMs

This risk pertains to those who develop LLMs or solutions based on LLMs. There are many great resources on the topic already, and some of them are linked below:

This blog contains many more frameworks and guidelines.

LLMs as attack surface

In an enterprise environment, as opposed to consumer solutions, LLMs are designed to solve specific problems using enterprise data, which means users ask questions of, or generate insights from, their corporate data. Invariably, such LLMs will have multiple integrations with inputs, data sources, data lakes, and other business solutions.

Attackers are known to be creative and to use existing capabilities for defense evasion. Living off the Land attacks are well known. Could there be a Living off the LLM attack in the future? Access control and authorization for the entire stack, the interfaces and the APIs will be key.
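One way to reduce the risk of a "Living off the LLM" scenario is to make every tool or API call that an LLM agent proposes pass through an explicit authorization layer instead of being executed directly. The sketch below is a hypothetical wrapper, with made-up tool names and roles, illustrating allowlisting and per-role authorization around LLM integrations.

```python
# Hypothetical authorization wrapper around LLM-proposed tool calls.
# Tool names, roles and permissions are illustrative assumptions.
ALLOWED_TOOLS = {
    "search_tickets": {"analyst", "manager"},   # read-only lookups
    "read_kb_article": {"analyst", "manager"},
    "export_customer_data": {"manager"},        # high-risk action, tighter role
}

def execute_tool_call(user_role: str, tool_name: str, arguments: dict) -> None:
    """Execute an LLM-proposed tool call only if it is allowlisted and authorized."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not allowlisted for LLM use")
    if user_role not in ALLOWED_TOOLS[tool_name]:
        raise PermissionError(f"Role '{user_role}' is not authorized for '{tool_name}'")
    # Log the decision for audit, then dispatch to the real implementation.
    print(f"AUDIT: role={user_role} tool={tool_name} args={arguments}")
    # ... dispatch to the actual tool here ...

execute_tool_call("analyst", "search_tickets", {"query": "VPN failures"})
# execute_tool_call("analyst", "export_customer_data", {})  # would raise PermissionError
```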

As organizations accelerate the deployment of LLMs and integrate them with organizational data, cyber defenders will have to elevate themselves and work out strategies to prevent misuse or abuse of this new attack surface.

Security use cases for LLMs

If you have read this far, you must be wondering: are there any benefits of LLMs in cybersecurity? The short answer is yes, but far fewer than the noise currently enveloping the industry would suggest. Let us examine some of the use cases that can actually benefit cyber defenders:

  • SOC/SIEM/XDR Augmentation. Traditionally, SOC teams have struggled with the quality and speed of triage, and LLMs can potentially speed this up to some extent. Their ability to churn through vast amounts of data, contextualize and enrich an alert, determine severity and risk, and recommend suggested actions can all be beneficial. SOC analysts being able to ask questions in simple language, with the LLM converting them to complex queries, fetching the data and presenting it in an understandable form, can go a long way in improving analysts’ lives. The cybersecurity industry seems to be headed this way with the launch of solutions like Microsoft Copilot for Security, and with various EDR/XDR/CNAPP and myriad other products launching their own AI chatbots.
  • While the capability looks promising, there are dangers as well. Without adequate grounding, LLMs are prone to hallucinations, and a SOC analyst, who themselves may not be very capable, would tend to trust the output of the LLM. What if it is wrong? While there is a rush to embed LLMs in every product, as consumers we must be aware that the benefits in standalone products will be minimal, and the real benefits will only come from solutions that have access to large data lakes like SIEM or XDR. If the solution landscape is fragmented, you may end up with multiple LLMs embedded in each solution, and we are back to square one, with a human having to correlate amongst them all. Thus, in my view, LLMs will be potential enablers only in solutions that already have large data sets available or that can integrate with typical enrichment sources such as IP lookup, threat intel lookup, etc.
  • Potential Threat Hunting. This could be the biggest use case, although I am yet to see any model capable of doing threat hunting. Threat hunting is a complex activity, and I wrote about it here. A model that can assist the SOC analyst in hunting by converting natural language questions into complex SIEM/XDR query language will go a long way in reducing fatigue and making life easier (a minimal sketch of such an assistant follows this list).
  • SecOps Automation. In a way, LLMs are like an advanced version of SOAR playbooks, in that they can contextualize an alert and undertake or recommend certain remedial actions. The ability to generate customized playbooks, on demand, for every alert is something that can really enhance productivity. However, given their penchant for hallucinations, it will be a brave organization that allows LLMs to take response actions unsupervised. A human in the loop will still be required for this use case.
  • Report Generation. LLMs can be used to generate reports or summarize large volumes of security data. This can be a huge time saver for humans.
  • Talent Management. Use of LLMs can enhance productivity for human analysts, allowing them to do more, thus reducing their burden and easing the pressure. This can lead to a more motivated workforce.
  • As the technology matures, I am sure many more use cases will be identified.
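As a sketch of the natural-language-to-query assistance mentioned under threat hunting above, the snippet below shows the general shape of such a workflow: the LLM is asked to translate an analyst's question into a query, and a human reviews the result before it ever runs against the SIEM. The call_llm function and the KQL-style output are placeholders, not any specific product's implementation.

```python
# Sketch of a human-in-the-loop natural-language-to-query assistant.
# call_llm() is a placeholder for whatever LLM API the organization uses;
# the generated query is only an example of what such a model might return.
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API with proper grounding.
    return ('SigninLogs | where TimeGenerated > ago(7d) '
            '| where ResultType != 0 | summarize Failures=count() by UserPrincipalName '
            '| where Failures > 20')

def hunt(question: str) -> None:
    prompt = f"Translate this threat hunting question into a SIEM query:\n{question}"
    candidate_query = call_llm(prompt)

    # Human in the loop: the analyst must review the query before it runs.
    print("Proposed query:\n", candidate_query)
    if input("Run this query against the SIEM? [y/N] ").strip().lower() == "y":
        print("(would submit the reviewed query to the SIEM API here)")
    else:
        print("Query discarded; refine the question and try again.")

hunt("Which users had more than 20 failed sign-ins in the last week?")
```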

AI vs AI: Hype or Reality?

The world is full of noise about how AI is the new danger and we need AI to fight back. The good news is that cyber defenders have already been using AI (ML actually) for nearly a decade and so we should have the upper hand, right?

As mentioned earlier in the article, quoting the IBM threat report, we have not yet seen AI being leveraged massively by bad actors. Maybe it will happen in the future; the same IBM report predicts:

Based on the analysis, X-Force predicts threat actors will begin to target AI broadly once the market coalesces around common deployment models and a small number of vendors. This analysis suggests that AI market dominance is the milestone that will trigger attacker investment in attack toolkits targeting AI.

At present, AI enables bad actors to become more efficient and agile, but the attack vectors remain similar. If the attack vectors are not changing much, do we, as defenders, need a major transformation in our approach? I would argue that getting the basics right, improving foundational controls, focusing on pre- and post-exploitation, and having the capability to rapidly detect or prevent attacks will still work for the near future. Do we really need AI (specifically Gen AI) for this? Maybe not. Our existing AI (ML) should suffice for some more time.

Conclusion

The pace of evolution in the field of Gen AI has been swift and remarkable. Since the launch of ChatGPT, several iterations of Gen AI technology have been released at an impressive rate. Who could have imagined that, barely 500 days after ChatGPT was launched in November 2022, we would already see many multimodal LLMs with varying capabilities?

While the cybersecurity world was still struggling to define XDR, we have a new vector to consider. At present, the threats may be manageable using existing processes and technologies, but the future is definitely going to be different. I expect to see AI-based autonomous attacks with improved defense evasion in the next few years, and that will be a real challenge.

The military world is already seeing AI-enabled drones and drone-versus-robot fights. In a historical first, a Russian ground robot was recently destroyed by a Ukrainian FPV drone. The cyber world cannot be far behind.

I am excited to see how this technology will interplay with other developments such as Artificial General Intelligence (AGI) and quantum computing in the future, and how they will shape future cyber defense. I do not envy future cyber defenders.
