Every large organization allocates a budget to run a Security Operations Center (SOC) or similar cybersecurity defense department. Smaller organizations tend to use a managed service bureau.
Either way, someone takes responsibility for the day-to-day cybersecurity defense of the organization and ensures it’s adequately protected against incidents and breaches. And the risk of such incidents and data breaches continually increases.
SOC teams (or their equivalents) spend their days monitoring network traffic and responding to any cyberattacks they detect. Unfortunately, every organization is under pressure to do more with less.
Improving the SOC function while keeping costs under control is on everyone’s mind, but it’s difficult to find the time and energy to transform a function that is perpetually in firefighting mode and constrained by budget.
Generative AI can help. Many cybersecurity vendors are adding AI to their products to filter and correlate alerts, identify priorities for the SOC team to focus on, and suggest (or automatically invoke) remediation actions.
But as many observers of generative AI have correctly pointed out, the quality of a generative AI solution is only as good as the quality of the data available. While hallucinations and incorrect results are inevitable, if you improve the data you improve the results.
Effectively curating cybersecurity data is the single largest problem area to address for successfully using AI to improve SOC productivity and efficiency.
A good approach to improve the situation is to create a knowledge layer between the alerting and monitoring systems and the analysts who are deciding which issues to remediate and how to remediate them.
The idea is to create an environment in which the SOC team is leveraging the advanced intelligence available through the curated knowledge layer and skilled application of AI. This can change the game entirely.
The important question is: what knowledge helps you complete your task? What part of the cognitive load (i.e., the thinking process) can be effectively offloaded to generative AI?
There’s an old adage that data is not information. To be useful, raw data must be transformed or distilled into significant information. Once you improve the quality of your data, the AI system can then transform the raw data into useful information.
When you are monitoring security alerts, you want to find a way to help staff handle the overwhelming volume of data generated by today’s highly active cybercriminal landscape, which AI, of course, only exacerbates.
You want to help them eliminate as much noise as possible, concentrate quickly on what needs to be done, and accelerate incident response.
What you want is to intelligently process, correlate, and prioritize these incidents and alerts, helping staff more effectively leverage their SIEM/XDR tools. Generative AI should be assisting you with the research, analysis, and rapid synthesis of data into information and understanding that leads to defensive action.
Conversely, feeding poor-quality data into an LLM incurs a latency cost, with a multiplier in cloud resource costs and AI token costs to improve results through trial and error. It may take several costly iterations to know whether the AI tool is returning the right results. You want to avoid a scenario in which the AI creates an issue or provides wrong information, leading to an incident.
The Security Knowledge Layer (SKL) sits between the incident alert and monitoring tools and security analysts. Security analysts leverage the SKL to determine what needs priority attention and what action, if any, to take to remediate an incident or breach.
Before generative AI, such a knowledge layer existed primarily in the brains of trained staff or “on paper” in documents and playbooks that take time to read and digest and are often out of date.
The AI-based knowledge layer sits between cybersecurity monitoring data sources (such as SIEMs, XDRs, and data lakes) and analysts, potentially processing petabytes of security data in real time to help them make sense of the ever-increasing flood of data and turn it into actionable insights.
Auguria has developed an effective SKL, using vector embeddings and machine learning to optimize critical security events for better human-machine teaming, enabling a more efficient and cost-effective security operations function.
Auguria’s platform works with any existing SIEM, XDR, or other monitoring infrastructure an organization may already have in place, reducing costs by filtering noise and efficiently storing data.
The SKL identifies the most critical elements of monitoring data for cybersecurity analysts, filters out noise, and summarizes aggregates.
It also ranks, correlates, and enriches events, using generative AI to detect events and support analyst decisions, reducing staff time and improving productivity. The SKL functions also include AI-powered incident triage, threat hunting, and root cause analysis.
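To make the ranking and correlation idea concrete, here is a toy sketch of the kind of embedding-based deduplication a knowledge layer can apply to collapse near-duplicate alerts before an analyst ever sees them. A real system like Auguria’s would use learned vector embeddings over security events; the bag-of-words vector and the `dedupe` helper below are simplified stand-ins for illustration only.

```python
# Toy sketch: collapse near-duplicate security alerts by vector
# similarity. A production knowledge layer would use learned
# embeddings; a bag-of-words vector stands in here.
from collections import Counter
from math import sqrt

def embed(event_text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(event_text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dedupe(events: list[str], threshold: float = 0.8) -> list[str]:
    """Keep one representative per cluster of near-duplicate events."""
    kept: list[tuple[str, Counter]] = []
    for text in events:
        vec = embed(text)
        if all(cosine(vec, v) < threshold for _, v in kept):
            kept.append((text, vec))
    return [text for text, _ in kept]

alerts = [
    "failed login for admin from 10.0.0.5",
    "failed login for admin from 10.0.0.5",  # exact duplicate, dropped
    "failed login for admin from 10.0.0.7",  # near duplicate, dropped
    "outbound connection to known C2 domain",
]
print(dedupe(alerts))  # two representative alerts remain
```

Even this crude version shows the leverage: three noisy login alerts collapse into one item for triage, while the genuinely distinct command-and-control alert survives intact.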
Auguria’s SKL ingests data at petabyte-per-hour scale and integrates with more than 350 security monitoring products. Data is normalized into the Open Cybersecurity Schema Framework (OCSF) as it is ingested. The system identifies redundancies and duplicates and compacts the data into summarized aggregates.
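Normalization into a common schema is what makes downstream correlation across 350+ products tractable. The sketch below shows the general shape of that step for an SSH log line: the field names follow the spirit of OCSF’s Authentication event class but are simplified for illustration, and the parser is a hypothetical example rather than Auguria’s actual ingest code (the full schema at schema.ocsf.io is far richer).

```python
# Minimal sketch of normalizing a raw vendor log line into an
# OCSF-style record at ingest. Field names are simplified
# approximations of OCSF's Authentication class.
import json
import re

RAW = "2025-01-15T08:42:11Z sshd[2211]: Failed password for root from 203.0.113.9"

def normalize_ssh_auth(line: str) -> dict:
    """Parse an sshd auth line into a simplified OCSF-style event."""
    m = re.match(
        r"(?P<time>\S+) sshd\[\d+\]: (?P<status>Failed|Accepted) password "
        r"for (?P<user>\S+) from (?P<ip>\S+)",
        line,
    )
    if not m:
        raise ValueError("unrecognized log line")
    return {
        "class_name": "Authentication",  # OCSF event class
        "time": m["time"],
        "status": "Failure" if m["status"] == "Failed" else "Success",
        "user": {"name": m["user"]},
        "src_endpoint": {"ip": m["ip"]},
    }

print(json.dumps(normalize_ssh_auth(RAW), indent=2))
```

Once every source emits the same shape, deduplication and aggregation become straightforward group-by operations rather than per-vendor special cases.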
Security operations centers (SOCs) and security analysts have worked in relative isolation for years, hunkering down over endless logs, alerts, and incident reports.
As usual, it’s not really a case of working harder, but working smarter. Generative AI offers the capability to work smarter by automatically analyzing, correlating, and reporting on large volumes of data.
Auguria leverages this power to create a knowledge layer between existing monitoring and alerting tools and the security analysts to distill, analyze, and automatically present actionable information to an analyst that otherwise would require significant manual effort.
This is an excellent application of generative AI – one that adds value to the existing software environment, simplifying complex triaging processes, shortcutting the remediation process, and improving the productivity of security analysts.
Copyright ©2026 Intellyx BV. Auguria is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article. Image source: Google Gemini.