Hoarding or Onboarding? Key Considerations of Security Data Strategy, Part 2

This is the second part in a three-part series. You can read part one here.


In real life, some of the things you do have no tangible outcome, but like most people (me included), you keep doing them for the sentimental value attached to them.

As an example, I have moved house more than seven times in the last 30 years, and I have carried my graduation books with me every time. For all I know, they are grossly out of date and of no use to me anymore. And yet they hog a big chunk of the cabinet in my home office.

Despite being guilty of this practice myself, I’d never recommend the approach in cybersecurity. You’d inevitably fall into the LASSA Trap (Log All, Store All, SIEM All): the flawed, cart-before-the-horse approach of storing everything you can in case you eventually find a purpose or use for it. It has proven to be a failed approach in cybersecurity and has cost teams dearly.

At the same time, adding more data from more sources, without really knowing whether it helps meet your security objectives, use-cases and compliance requirements, has resulted only in more false positives, higher cost and operational fatigue. That fatigue, in turn, becomes one of the key reasons teams lose skilled professionals who took a great deal of time and effort to find and hire.

It Is Not the Data That Matters, but the “Signals That Matter”

Threat detection in the SOC relies heavily on data: data that is structured, noise-free, clean and carrying the right context, so that it can be converted into signals.

The logs that are generated need to be treated first and made detection-ready (use-case-ready) on the LEFT, before they are ingested and used for security use-cases on the RIGHT. This ensures that the limited skilled professionals you have are focused on deriving security value rather than being overwhelmed by data engineering and management challenges.

“Data Matters, for a Better SOC”

Today, the SOC is perpetually fighting unnecessary fires caused by raw data instead of focusing on what matters: stopping attacks and breaches, every second of every day.

This is pushing some SOC leaders back to the drawing board to consider a Security Data Strategy (SDS). The SDS sits right between the Data Producers (firewall, EDR, CWP, Windows logs, etc.) and the Data Consumers (SIEM, XDR, data lake, etc.), so that the consumer only works with well-treated, high-quality data to deliver on the objectives and use-cases and to meet compliance requirements. A minimal sketch of this placement follows.
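To make the placement concrete, here is a minimal sketch in Python of a treatment pipeline sitting between producers and consumers. The stage names (`normalize`, `reduce_noise`) and the field names are hypothetical illustrations, not a real product API.

```python
from typing import Callable, Iterable, Optional

Event = dict  # one parsed log record from a producer (firewall, EDR, etc.)
Stage = Callable[[Event], Optional[Event]]  # a treatment step; None means "drop"

def normalize(event: Event) -> Optional[Event]:
    """Map a vendor-specific field name onto a common one (illustrative)."""
    if "ts" in event:
        event["timestamp"] = event.pop("ts")
    return event

def reduce_noise(event: Event) -> Optional[Event]:
    """Drop events no use-case consumes (an illustrative, hypothetical rule)."""
    return None if event.get("severity", 0) == 0 else event

def sds_pipeline(events: Iterable[Event], stages: list[Stage]) -> Iterable[Event]:
    """Run each event through every treatment stage before any consumer sees it."""
    for event in events:
        for stage in stages:
            event = stage(event)
            if event is None:  # a stage classified the event as noise
                break
        if event is not None:
            yield event  # only treated, high-quality data reaches the SIEM/XDR/data lake
```

The point of the sketch is the position, not the particular stages: everything between the producer and the consumer is where the SDS does its work.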

In this blog, the focus is on elevating the quality of data to convert data into signals and improve TDIR (threat detection, investigation and response) while under pressure to rein in costs and achieve higher, faster SOC throughput. We will also discuss how shifting the SOC to the LEFT can save significant time, effort and cost on the RIGHT in operations. Specifically, we will cover:

  • Key considerations for defining your ‘Security Data Strategy’
  • The Enablers of ‘Security Data Strategy’ execution

This is certainly NOT a one-size-fits-all approach: the SDS is governed by the objectives, use-cases and compliance requirements of the business.

Security Data Strategy

Objective: Current SOC implementations are riddled with challenges, including:

  • Overrunning costs
  • An overwhelming number of alerts and false positives
  • Snail-paced security operations
  • The overhead of data engineering and management

It is important to define the primary and secondary objectives of building your SDS. Crafting a security data strategy is no longer optional; it is an act of survival. Providing continuous security value to the business has become an imperative for any SOC.

For example, suppose your organization is struggling with the ever-increasing license cost of your SIEM: the ingested data volume keeps growing and, at the same time, is making the SIEM platform sluggish.

In this case, the primary objective is ‘Cost Optimization of SIEM’ and the secondary objective is ‘Increasing the speed of the SIEM platform’.

Use-case: Implementing use-cases is how you achieve the objectives defined in the stage above. The use-cases dictate the quality level of the data and the preprocessing it needs before the SIEM platform consumes it. Each use-case, whether alert detection, investigation or threat hunting, differs in the data it requires, the retention period, the speed at which it must arrive, and the schema it must arrive in, as the sketch below illustrates.
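One way to pin these requirements down per use-case is to write them out explicitly; here is a minimal sketch, where the numbers and schema names are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class DataRequirement:
    retention_days: int       # how long the use-case needs the data queryable
    max_latency_seconds: int  # how fresh the data must be when consumed
    schema: str               # the shape the consumer expects the data in

# Illustrative values only; every business derives its own from its objectives.
REQUIREMENTS = {
    "alert_detection": DataRequirement(retention_days=30, max_latency_seconds=5, schema="normalized"),
    "investigation": DataRequirement(retention_days=180, max_latency_seconds=300, schema="normalized"),
    "threat_hunting": DataRequirement(retention_days=365, max_latency_seconds=3600, schema="raw+normalized"),
}
```

Writing the requirements down this way forces the conversation about which data each use-case actually needs, before anything is ingested.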

Taking the same example from above, reducing the overrunning SIEM cost starts with noise reduction and de-duplication to shrink data volume and size. Less data ingested into the SIEM keeps the costs under control. The second measure to further optimize cost is to send only high-fidelity data and alerts to the SIEM and route non-malicious events and low-fidelity data to a Security Data Lake. This further speeds up security operations and reduces the overall licensing cost, as sketched below.
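Here is a minimal sketch of that de-duplication plus fidelity-based routing. The fingerprint fields, the `fidelity` score and the 0.8 threshold are hypothetical assumptions for illustration.

```python
import hashlib
import json
from typing import Optional

seen: set[str] = set()  # in production this would be a bounded, time-windowed cache

def dedupe_key(event: dict) -> str:
    """Stable fingerprint over the fields that define a duplicate (illustrative choice)."""
    material = json.dumps(
        {k: event.get(k) for k in ("source", "host", "event_id", "message")},
        sort_keys=True,
    )
    return hashlib.sha256(material.encode()).hexdigest()

def route(event: dict) -> Optional[str]:
    """Return the destination for an event, or None to drop a duplicate."""
    key = dedupe_key(event)
    if key in seen:
        return None  # duplicate: dropped before anyone pays to ingest it twice
    seen.add(key)
    if event.get("fidelity", 0.0) >= 0.8:  # hypothetical score set during treatment
        return "siem"  # high-fidelity: worth full detection and alerting
    return "data_lake"  # low-fidelity / benign: cheap storage, still queryable
```

The design choice worth noting: nothing is thrown away. Low-fidelity events still land in the data lake, so investigations and threat hunting keep their history while the SIEM bill stops growing.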

Compliance: Businesses must meet industry and regulatory requirements, and compliance is and should be part of every strategy, particularly one concerning data and security. With global privacy and data localization requirements on the rise, it is imperative to consider compliance throughout the strategy. Doing otherwise simply invites trouble.

Logs come from multiple sources, and compliance requirements such as GDPR often mean they must be pushed to platforms hosted in the EU. Often, regulations and laws require sensitive data to be treated differently, with prescribed controls such as masking and encryption; a sketch of the masking step follows.
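As an illustration of such a control, here is a minimal masking sketch. The field list and the salted-hash scheme are assumptions; the actual controls come from the applicable regulation.

```python
import hashlib

# Hypothetical list of fields the applicable regulation treats as sensitive.
SENSITIVE_FIELDS = ("username", "email", "src_ip")

def mask(event: dict, salt: bytes) -> dict:
    """Replace sensitive values with salted hashes: still correlatable, no longer readable."""
    masked = dict(event)
    for field in SENSITIVE_FIELDS:
        if field in masked:
            digest = hashlib.sha256(salt + str(masked[field]).encode()).hexdigest()
            masked[field] = f"masked:{digest[:16]}"
    return masked
```

Because the same value always hashes to the same token (for a given salt), analysts can still pivot on a masked username across events without ever seeing the raw value.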

Hence, accounting for compliance requirements early reduces non-compliance penalties and the overhead of becoming compliant later.

Planning a security data strategy is just half the battle won; execution at scale comes next. Oftentimes, security teams put the policy on paper but lose momentum because of the overhead associated with implementation. The traditional approach of creating manual rules gives you power and control, but at the cost of speed and adaptability. And lest we forget, attackers love to break rules.

In Part 3 of this blog, we will talk about the enablers of executing the security data strategy at scale with a ‘Security Knowledge Layer’ that is AI-driven, adaptable and works out of the box.

Author

Keith Palumbo

CEO, Auguria

SECURE EARLY ACCESS

Are you ready to set a new standard for your SecOps team?

Auguria is inviting interested organizations to apply for early access to the platform. If you’re eager to see Auguria in action, we encourage you to get in touch using the form below.