Every day, the news headlines pour out stories about the latest cyberattacks. There are constant reports of businesses and corporations being hit, with thousands, or even millions, of Personally Identifiable Information (PII) records compromised and stolen.
The damage can be so severe that some companies never recover from it. The bottom line is that Security Operations Centres (SOCs) are simply too overburdened and overtaxed to keep up with all of this, let alone address future cyberthreats.
One possible solution to this is the use of automated tools. A prime example is Artificial Intelligence, or AI for short. The goal of this article is to provide a brief overview of what AI is all about, and its impact upon the SOC.
A definition of Artificial Intelligence
Artificial Intelligence can be specifically defined as follows:
“In the simplest terms, Artificial Intelligence (AI) refers to systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect.” (SOURCE: 1)
In other words, the goal of AI is to take the processes and thought patterns of the human brain and apply them to complex situations that would take a normal human being hours or even days to work through and resolve.
The impacts of AI on the Security Operations Centre
At the heart of any company's cybersecurity strategy is the deployment of the SOC. It is here that the entire cybersecurity threat landscape is monitored on a 24/7/365 basis, and where any attack vectors can be countered and mitigated.
As sophisticated as it might sound, SOCs face one Herculean problem: the IT security staff who man this fortress are simply too overworked and fatigued trying to keep track of everything, especially when it comes to filtering and triaging all of the alerts and warnings that come in.
This fatigue stems from the shortage of cybersecurity workers in the job market today. The stakes are rising too: a recent study conducted by the Ponemon Institute found that the average total cost of a security breach has risen from $3.62 million to $3.86 million, a 6.4% increase. (SOURCE: 2)
In another study, conducted by Imperva, it was discovered that 30% of IT security teams ignored the bulk of the warnings and alerts they received, while 4% of them turned off their notification processes altogether. Even more shocking, 56% of IT security teams ignored certain alerts outright, based upon their own past experiences with false positives. (SOURCE: 3)
These findings paint a very disturbing picture for the SOC and point to a trend now known as "alert fatigue". This is where AI can probably make its biggest and most positive impact upon an SOC, in the following areas:
1) The automation of incident analysis:
For every alert and warning that comes in, at least in theory, the SOC must sort through each one and triage it based upon the level of threat it indicates. For example, an alert may rank a potential threat as "high", "medium", or "low". Obviously, those ranked at a high level should be given first priority. But remember, as alluded to before, with thousands of messages coming in every day, it is nearly impossible, as well as an enormous investment in time, for an IT security team to work through them all. This is where an AI system can come in. With the threat intelligence feeds that are piped into it, it can learn very quickly which alerts and warnings are real and which ones are not.
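As a simple illustration, the severity-based triage described above can be sketched in a few lines of Python. The alert fields and severity labels here are hypothetical, not drawn from any particular SIEM product:

```python
# Minimal sketch of severity-based alert triage (hypothetical alert format).
SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def triage(alerts):
    """Return alerts sorted so the highest-severity ones are handled first."""
    return sorted(alerts, key=lambda a: SEVERITY_RANK[a["severity"]])

alerts = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "high"},
    {"id": 3, "severity": "medium"},
]

for alert in triage(alerts):
    print(alert["id"], alert["severity"])  # alert 2 (high) is printed first
```

In a real SOC, the severity label itself would come from the AI model's scoring of the raw alert, rather than being hand-assigned as it is in this toy example.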
2) It can augment the existing staff:
The image is often conjured up of an AI system replacing an entire IT security team, which translates into fears of instant job loss. However, this is far from the reality. The AI tools that are out there are by no means perfect yet, and it will be an exceptionally long time before they can completely mirror human judgment, if they ever do. So, AI can instead be used in a way that supplements the existing team (especially given the huge cybersecurity labour crunch alluded to before). For example, it can be used to efficiently find any common denominators that exist in the plethora of alerts and warnings, and from there, provide alternatives and even suggestions that can be acted on in just a matter of minutes.
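The "common denominator" idea can be sketched very simply: count how often each indicator recurs across the flood of alerts and surface the most frequent one. The alert fields and IP addresses below are purely illustrative assumptions:

```python
from collections import Counter

# Toy sketch: find the "common denominator" across a flood of alerts by
# counting how often each source IP appears (fields are hypothetical).
alerts = [
    {"source_ip": "203.0.113.5", "type": "failed_login"},
    {"source_ip": "203.0.113.5", "type": "port_scan"},
    {"source_ip": "198.51.100.7", "type": "failed_login"},
    {"source_ip": "203.0.113.5", "type": "failed_login"},
]

ip_counts = Counter(a["source_ip"] for a in alerts)
top_ip, hits = ip_counts.most_common(1)[0]
print(top_ip, hits)  # the single address behind most of the noise
```

A production system would correlate many more fields (hashes, hostnames, user accounts) than this single-field count, but the principle is the same.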
3) It can greatly reduce the dwell time:
One of the metrics that most IT security teams do not like to talk about is dwell time. It reflects the period from when a threat actor gains covert, unauthorised access to an IT asset until they are detected and purged. In fact, according to a study by Mandiant, the average dwell time is 101 days. (SOURCE: 4)
Of course, this is simply too much time, and all sorts of damage can be done in that window. Here, a specially trained AI system can greatly help automate threat-hunting exercises and massively decrease the dwell time.
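To make the metric concrete: dwell time is simply the gap between initial compromise and detection. A minimal sketch, using hypothetical dates chosen to match the Mandiant average cited above:

```python
from datetime import date

# Dwell time: days between initial (covert) compromise and detection.
compromise_date = date(2023, 1, 10)  # hypothetical first foothold
detection_date = date(2023, 4, 21)   # hypothetical discovery by the SOC

dwell_time_days = (detection_date - compromise_date).days
print(dwell_time_days)  # 101 days, the average reported by Mandiant
```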
Overall, this article has examined in a general sense what Artificial Intelligence is, and the three key areas where it can be used in the Security Operations Centre. But there are other potential uses for AI as well, especially when it comes to:
- The analysis of network traffic on a real-time basis, to filter and discard any malicious data packets that may be present;
- The analysis of source code to quickly detect any backdoors that have not yet been shut down;
- The further augmentation of the tools that are used in endpoint security;
- The modelling of end-user behaviour, to determine any profiles that can be deemed anomalous or even malicious in nature.
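The last point, modelling end-user behaviour, can be illustrated with a deliberately simple statistical sketch: flag any user whose activity sits far from the population baseline. The login counts, the z-score threshold of 2, and the user names are all illustrative assumptions, not a real baseline:

```python
import statistics

# Toy sketch of behavioural anomaly detection: flag users whose daily
# login count is more than 2 standard deviations from the mean.
logins_per_day = {
    "alice": 12, "bob": 10, "carol": 11, "dave": 9,
    "erin": 13, "mallory": 97,  # an outlier worth a closer look
}

mean = statistics.mean(logins_per_day.values())
stdev = statistics.stdev(logins_per_day.values())

anomalous = [u for u, n in logins_per_day.items()
             if abs(n - mean) / stdev > 2]
print(anomalous)  # ['mallory']
```

Real user and entity behaviour analytics (UEBA) tools build far richer per-user baselines than a single count, but the underlying idea of scoring deviation from a norm is the same.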