HERE’S AN unfortunate reality: preventative systems will fail. They won’t fail all of the time, or even most of the time, but they will fail eventually, and perhaps when it matters most. That being so, a robust detection system needs to be in place. As Connor Morley observes, there’s no better detection system right now than a dedicated and efficient threat hunting team.
The time it takes for organisations to detect a security breach has decreased dramatically in the last five years. That’s according to most public reports, at least. If you do happen to hear a story about hackers lingering inside a corporate network for weeks, months or even a year or more, that’s probably because the host organisation wasn’t using any threat hunters.
A top-class threat hunting team can reduce the time it takes to detect a cyber breach to hours or even minutes. Threat hunting is the capacity to actively defend estates and networks through a constantly evolving approach to understanding – and, therefore, mitigating – offensive capabilities. Orchestrated effectively, it fills the gap that currently exists in the industry between offensive and defensive capabilities.
What does filling this gap look like in the real world, though? Think about vulnerability management. Offensive teams have tools like vulnerability scanners, which are automated. They also have the manual approach of penetration testers and ‘red teamers’. Both actively search for vulnerabilities on a given company’s network, but the manual approach can obviously find things that are brand new or otherwise unique to an individual estate.
Defensive capabilities, meanwhile, primarily rely on automated systems of the sort you would readily find in a Security Operations Centre. These tools are signature-based: signatures are devised through research, published to production systems and incorporated into the alert systems actively used by assessors.
Threat hunting is essentially the defensive counterpart of penetration testers and ‘red teamers’ in that it’s the manual approach to defensive actions. Threat hunters don’t wait to be told what’s bad, and they don’t wait for a machine to notice that something awry has happened. Instead, they teach machines to recognise that something bad has occurred, actively and manually adjusting for particular compromises or client concerns based on specific needs.
This can be done either by setting standard rules and tool sets that detect malicious behaviour, or by hunting for very specific threat actions (often referred to as ‘hunt sprints’ or use cases). Threat hunters devise hunt sprints specifically for Indicators of Compromise and Indicators of Attack based on new or emerging techniques and exploits, as well as on actions encountered during an active incident or reported in the wild. They can then conduct a sweep across all of their clients’ estates to find anything that appears to be a compromise according to the hunt sprints that have been devised.
Those findings are then incorporated into the threat hunters’ technology stack to create detection capabilities that identify when any attacker tries to use the same technique. This process creates both retrospective and future protection for all of a threat hunting team’s clients.
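As an illustration, the core of a hunt-sprint sweep is simply matching endpoint telemetry against a curated set of indicators. The minimal Python sketch below assumes a hypothetical log format; the indicator values, field names and hostnames are all invented for the example, not taken from any real hunt.

```python
# Minimal sketch of a hunt-sprint sweep: match endpoint telemetry
# against a set of Indicators of Compromise (IoCs). All indicator
# values, hostnames and the log format here are hypothetical.

IOC_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}   # known-bad file hashes
IOC_DOMAINS = {"updates.example-c2.net"}            # known C2 domains

def sweep(events):
    """Return (host, field, value) for every event matching an indicator."""
    hits = []
    for event in events:
        if event.get("file_hash") in IOC_HASHES:
            hits.append((event["host"], "file_hash", event["file_hash"]))
        if event.get("dns_query") in IOC_DOMAINS:
            hits.append((event["host"], "dns_query", event["dns_query"]))
    return hits

telemetry = [
    {"host": "ws-041", "file_hash": "d41d8cd98f00b204e9800998ecf8427e"},
    {"host": "srv-02", "dns_query": "intranet.local"},
]
print(sweep(telemetry))
```

In practice the same indicator sets would then be pushed into the automated detection stack, so that future uses of the technique trigger alerts without a manual sweep.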
Mind of the attacker
Threat hunting is more than just a job or a role. Rather, it’s an extensive mindset with a distinctive capability. This art and science brings together lots of different elements of what many security teams already have in place, but supplements them with proactive engagements that increase the ability to defend networks.
Most importantly, it allows defenders to tailor their efforts to a company’s specific infrastructures and business needs. Absolutely fundamental to this approach is nurturing the ability to think like an attacker.
Many threat hunters start with Offensive Security Certified Professional training as a baseline to give them a foundational understanding of offensive security. Truly developing an attacker’s mindset, though, requires the ability to ask the same questions actual criminals do: Is this application something that can be misused? How can I misuse it? Can I make something do something that it’s not supposed to?
The more understanding threat hunters have of TTPs (tactics, techniques and procedures), kill chains, standard procedures, exfiltration, targets and data theft, the better they can hypothesise about what an attacker might try next on a network. Typically, this involves answering questions like: How are they going to gain access to the network? What persistence mechanisms are they using? What exfiltration methods are they adopting? How are they actually moving through the network?
This is why research is fundamental. Research is how threat hunters advance their understanding of how attackers work, how they stay on the front lines and on their toes, how they advance analytical systems and how they anticipate new offensive capabilities.
Ideally, each individual in a threat hunting team has a different area of interest, which they’re able to pursue at their leisure, basically to research what they find interesting. This then leads to a huge range of expertise in lots of different sectors that’s directly infused into the team’s abilities and, indeed, the defensive capability of any company that they’re tasked to protect and safeguard.
Threat hunting isn’t for everyone. For small businesses, threat hunting capabilities may be overkill. For bigger corporations with Intellectual Property or large databases, threat hunting is more or less a necessity.
It’s most useful for those operations likely to be targeted by active attackers. The riper the target, the more relevant threat hunting becomes.
Since the approach barely existed five years ago and the term ‘threat hunters’ invokes images of comic book heroes united to defeat a collective of ‘super villains’, it makes sense that some misconceptions about the field have emerged. One popular myth is that threat hunters sift through all the data that comes from every machine across every client’s estate. That’s impossible. A single machine can generate thousands – even millions – of logs per day.
Then there are servers with all of their connections, application handling and similar activity. Instead of manually checking everything, threat hunters focus on constantly adapting detection systems to avoid going through data by hand. However, doing so requires the continual development of manual detection capabilities from raw data. That being so, automated and manual detection go hand-in-hand.
Another misguided belief imagines threat hunters constantly chasing detected attackers out of an estate. Threat hunters don’t engage in hand-to-hand ‘combat’ in a network. Rather, they use response capabilities to hinder an attacker and frustrate their activity until a fully-fledged remediation solution can be devised. They track what the attacker is doing and how the attack is being conducted, and focus on its goal.
Based on all of that information, well-designed frustrations – like ‘bottlenecking’ network speed or isolating particular command protocols – afford internal or external incident response teams the time to kick the attacker out of the network and prevent them from coming back.
Dealing with a hands-on keyboard attacker is always a dangerous situation for multiple reasons, which can change on a case-by-case basis. One reason is that an attacker who recognises a threat hunting team’s presence may suddenly change TTPs and the way in which they’re attacking the system (something they would rarely do if their tactics are working, of course). This can easily hinder a hunt team’s ability to track an attacker across the estate.
Another example is that, if detected, some attackers will, to coin a phrase, ‘go nuclear’. They’ll cause as much damage as possible in the shortest amount of time, and particularly so if they’re aiming to damage the organisation in the first place. They trigger everything they can all at once.
One last misconception is that threat hunters will detect instantly when an attacker gains a foothold in the estate. That’s not normally the case. Estates can be so vast that a simple foothold – and notably if it’s tied to a new vulnerability – can be very hard to detect. However, threat hunters will detect malicious activity that’s typical of attack operations by their TTPs for data theft, exfiltration, persistence, pivoting through a network or memory injection, for instance.
The use of any ‘bread-and-butter’-style attacker technique allows threat hunters to find malicious activity pretty easily. However, a hunt team can also investigate abnormal behaviour, which can then highlight new/adapted techniques used by attackers. That will then allow the team to trace back to how the attacker(s) accessed the network, find that vulnerability and then move forward with remediation.
Zero trust models
Much like the security industry in general, threat hunting is moving towards tighter security frameworks such as zero trust models. A key reason for this is that one of the main factors in compromises nowadays is the internal threat, whereby an employee of a company is used as the point of attack (or is indeed the attacker). Because this kind of attacker already has access to the system and first-hand knowledge of how it works, the attack is much harder to detect and to associate with a particular individual.
The zero trust model starts with the premise that every action is deemed untrustworthy and, therefore, needs authorisation and categorisation. This means there can be no action on an estate that isn’t associated with a particular person or categorised as a deliberate activity. If an action cannot be categorised, it follows that it can readily be detected.
The cyber security industry is also moving away from blacklists, which keep growing longer and longer because there are now so many ways for an attacker to evade them. Unruly blacklists are prompting the move towards whitelists. This means that software run on the network, or on an estate, must be approved by the client’s security teams before it can be used. Put simply, anything that deviates from the allowed procedures or technology will be blocked.
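The whitelist idea can be sketched in a few lines. In this hypothetical Python example, an executable is identified by its hash and only pre-approved binaries may run; the hashes and their labels are invented for illustration, and a real deployment would of course sit in the operating system or an endpoint agent, not application code.

```python
# Minimal sketch of whitelist (allow-list) enforcement: only software
# approved by the security team may execute; everything else is denied,
# which is itself a detection signal. All hash values are hypothetical.

APPROVED = {
    "9f86d081884c7d659a2feaa0c55ad015",  # example: corporate VPN client
    "e3b0c44298fc1c149afbf4c8996fb924",  # example: approved office suite
}

def authorise(executable_hash):
    """Allow only pre-approved binaries; deny (and alert on) the rest."""
    if executable_hash in APPROVED:
        return "allow"
    return "deny"  # a denied action would also raise an alert for the hunt team

print(authorise("9f86d081884c7d659a2feaa0c55ad015"))
print(authorise("ffffffffffffffffffffffffffffffff"))
```

The key property for threat hunters is the default: under a blacklist an unknown binary runs silently, whereas under a whitelist it is both blocked and flagged, turning novel tooling into an immediate signal.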
Threat hunters have to adapt in order to maintain cutting-edge detection for methodologies that may try to bypass zero trust models or whitelists. The top teams will find ways that can leverage these evolutions to their advantage.
It’s all about people
At its heart, a threat hunting team is all about people. Finding the best people is the key to keeping up with (and staying ahead of) the attackers. Ideal team members need to be keen. They need to be willing to step up, and to have the technical know-how that allows them to carry out the role.
Although developing these capabilities in-house requires an abundance of training, an internal threat hunting team is ideal for any corporation continually dealing with sensitive materials.
For companies thinking of hiring a threat hunting team, finding one that can absolutely be trusted to do the job well is essential. This team has to be one you want operating on the front line of any incident and ready to handle it from start to finish, until such time that the attacker is banished from the estate.
When preventative systems fail, it’s fair to suggest that the best defence is a bunch of focused minds thinking very deeply about what comes next.
Connor Morley is Senior Threat Hunter at F-Secure (www.f-secure.com)