
Reducing Alert False Positives with Risk-Based Use Case Mapping


Making sense of an overwhelming amount of data is one of the biggest challenges of any Security Operations Center (SOC). It is difficult to digest millions of different appliance logs and events, find the observables that matter, and detect real incidents. It is especially hard to align findings with the cyber security kill chain that adversaries follow when attacking organizational infrastructure. Cybersecurity professionals frequently refer to the process of connecting all of these dots as "finding the needle in the haystack."

False alerts generated in the SOC are an ongoing reality for SOC analysts, adding stress to what is an inherently challenging work environment. The question is how to reduce the number of false positives to improve the efficiency of the SOC.

At CyberProof, we find that the number of false alerts can be minimized by leveraging risk-based mapping of use cases, which improves an organization's security posture and reduces mean time to respond.


[Diagram: Correlation of Events]

Security Platform Management that Supports Effective Data Collection

Implementing risk-based mapping starts with data collection. Every security or infrastructure appliance gives you different key-value pairs of information, and making sense of this data requires normalization using a common information model in your Security Information and Event Management (SIEM) platform. For security events, the normalized fields could include, for example (a minimal normalization sketch follows this list):

  • Dest - the destination of the event; this can be an endpoint name, IP address, website, server, etc.
  • Source - where the event originates or is generated from; this can be an endpoint name, IP address, website, server, etc.
  • MessageID - further information about the event; in e-mail logs, for example, this is the Subject.
  • User - the user under whose name the event is triggered; this can be provided by Active Directory, the system, a program, or a process.
  • Signature - a malicious indicator from a security appliance; this might include a malware name, protocol, domain, or rule name.
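To make this concrete, here is a minimal sketch in Python of how an appliance-specific event might be renamed into such a common information model. The raw keys, field map, and sample values are illustrative assumptions, not any particular SIEM's normalization engine.

```python
# Minimal normalization sketch - raw keys, field map, and values are
# illustrative assumptions, not a specific appliance's schema.

RAW_EMAIL_EVENT = {
    "rcpt_addr": "ceo@example.com",
    "sender_ip": "203.0.113.45",
    "subject": "Invoice overdue - action required",
    "auth_user": "EXAMPLE\\mailgw-svc",
    "verdict": "Phish.Generic.Credential",
}

# Map this appliance's key names onto the shared fields described above
# (Dest, Source, MessageID, User, Signature).
FIELD_MAP = {
    "rcpt_addr": "dest",
    "sender_ip": "source",
    "subject": "message_id",
    "auth_user": "user",
    "verdict": "signature",
}

def normalize(raw_event: dict, field_map: dict) -> dict:
    """Rename appliance-specific keys to the common information model."""
    return {field_map[key]: value
            for key, value in raw_event.items()
            if key in field_map}

if __name__ == "__main__":
    print(normalize(RAW_EMAIL_EVENT, FIELD_MAP))
    # {'dest': 'ceo@example.com', 'source': '203.0.113.45', ...}
```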


Implementing Detection Rules in the SIEM

Utilizing normalized data in risk-based mapping first requires implementing detection rules in your SIEM.

At CyberProof, for example, we use a Use Case Factory for this process. The Factory can be thought of as the brain of a very efficient "Use Case Machine" - providing optimized, tuned rules that trigger an alert only when something significant happens.
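
As an illustration only - not the actual Use Case Factory implementation - a tuned detection rule over the normalized fields above might look like the following sketch; the rule name, score, and trigger condition are assumptions.

```python
# Illustrative detection rule over normalized events; the names, scores,
# and trigger condition are assumptions for this sketch.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DetectionRule:
    name: str
    base_score: int          # severity of the finding when the rule fires
    mitre_technique: str     # alignment with MITRE ATT&CK
    condition: Callable[[dict], bool]

SUSPICIOUS_PROCESS_CREATION = DetectionRule(
    name="Suspicious process creation",
    base_score=8,
    mitre_technique="T1059",  # Command and Scripting Interpreter
    condition=lambda event: event.get("signature") == "suspicious_process_creation",
)

def evaluate(rule: DetectionRule, event: dict) -> Optional[dict]:
    """Return an alert only when the rule logic actually triggers."""
    if not rule.condition(event):
        return None
    return {
        "rule": rule.name,
        "base_score": rule.base_score,
        "technique": rule.mitre_technique,
        "risk_object": event.get("user") or event.get("dest"),
    }
```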


The challenge is connecting different alerts to a single source of an incident. This is achieved through correlation and risk-based mapping.


Risk-based mapping is achieved as follows:

  1. Assign risk_scores to notable risk_objects. For example, a user such as an admin account or a VIP in the company (such as the CEO) has a higher risk score than a normal desktop user in the field. A system such as a server hosting Active Directory or a critical web application has a higher risk score than a normal Windows 10 desktop endpoint.
  2. Assign the base_score of each detection rule based on the severity of the findings when the rule logic triggers. For example, a rule detecting suspicious process creation should have a higher score than a signature-based malware finding; both are notable, but the impact is different.
  3. Take the risk_score of the risk_object and multiply it by the rule's base_score. The mapped alerts can then be grouped by an attack vector aligned with the MITRE ATT&CK framework. If the combination of alerts occurs within a certain timeframe - for example, within 48 hours on the same user or system - an incident is created. (A minimal sketch of this scoring follows the diagram below.)
    [Diagram: Correlation of Events]
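
To make the three steps concrete, here is a hedged sketch in Python; the specific risk scores, the 100-point incident threshold, and the 48-hour grouping window are illustrative assumptions, not recommended values.

```python
# Hedged sketch of the risk-based correlation described above; scores,
# threshold, and window are assumptions for illustration.

from collections import defaultdict
from datetime import datetime, timedelta

RISK_SCORES = {                 # step 1: notable risk_objects score higher
    "ceo@example.com": 10,
    "ad-server-01": 9,
    "desktop-042": 2,
}

WINDOW = timedelta(hours=48)
INCIDENT_THRESHOLD = 100

def correlate(alerts: list) -> list:
    """Group alerts per risk_object inside the window and open an incident
    when the combined risk crosses the threshold."""
    incidents = []
    by_object = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_object[alert["risk_object"]].append(alert)

    for risk_object, object_alerts in by_object.items():
        window_start, bucket, total = object_alerts[0]["time"], [], 0
        for alert in object_alerts:
            if alert["time"] - window_start > WINDOW:     # start a new window
                window_start, bucket, total = alert["time"], [], 0
            # step 3: risk_score of the object multiplied by the rule's base_score
            total += RISK_SCORES.get(risk_object, 1) * alert["base_score"]
            bucket.append(alert)
            if total >= INCIDENT_THRESHOLD:
                incidents.append({
                    "risk_object": risk_object,
                    "score": total,
                    "techniques": sorted({a["technique"] for a in bucket}),
                })
                bucket, total = [], 0
    return incidents

if __name__ == "__main__":
    alerts = [
        {"risk_object": "ceo@example.com", "base_score": 8,
         "technique": "T1059", "time": datetime(2024, 5, 1, 9, 0)},
        {"risk_object": "ceo@example.com", "base_score": 5,
         "technique": "T1566", "time": datetime(2024, 5, 2, 14, 0)},
    ]
    print(correlate(alerts))    # one incident: 10*8 + 10*5 = 130
```

Because the risk_object's score multiplies every contribution, several low-severity alerts on a VIP or a critical server can cross the threshold, while the same alerts on an ordinary endpoint never become an incident - which is exactly how the noise is filtered out.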

Automation, Orchestration and Enrichment Improve Response Time

The efficiency of the SOC is improved when incidents and alerts are automatically enriched in a common orchestration platform. Additional information around observables from threat intelligence feeds, for example, can be orchestrated in a detailed playbook that gives SOC analysts the maximum possible insight for analyzing and responding to cyber threats.
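
A playbook step of that kind might look roughly like the sketch below; the ti_lookup function and its hard-coded verdict are stand-ins for whatever threat intelligence source the orchestration platform actually queries.

```python
# Illustrative enrichment step; ti_lookup stands in for a real
# threat-intelligence query (a TIP, a reputation API, etc.).

def ti_lookup(observable: str) -> dict:
    """Return a verdict for an observable; hard-coded here for the sketch."""
    known_bad = {"203.0.113.45": "known C2 infrastructure"}
    return {"observable": observable,
            "verdict": known_bad.get(observable, "no match")}

def enrich_alert(alert: dict) -> dict:
    """Attach threat-intel context to the alert's observables so the
    analyst sees it without leaving the case."""
    observables = [value for value in (alert.get("source"), alert.get("dest")) if value]
    alert["enrichment"] = [ti_lookup(observable) for observable in observables]
    return alert

if __name__ == "__main__":
    print(enrich_alert({"source": "203.0.113.45", "dest": "ceo@example.com"}))
```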

These innovations ensure that SOC analysts are not overwhelmed with meaningless information, which takes time to investigate and requires pulling together additional information from stakeholders. Automation and enrichment help reduce alert fatigue and drastically improve the mean time to respond to cyber security threats.

Interested in learning more about how to improve the cybersecurity posture of your organization? Contact us!
