Defining the Agentic SOC
An agentic SOC deploys AI agents that can reason, plan, and execute investigation tasks with meaningful autonomy. These agents gather evidence from multiple sources, correlate findings, assess risk, and provide structured recommendations. The critical distinction from traditional automation is that agents can adapt their approach based on what they discover, rather than following rigid playbooks.
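To make the contrast with rigid playbooks concrete, here is a minimal sketch of an adaptive investigation loop. All names and data sources are hypothetical, and the logic is deliberately simplified: the point is that the evidence gathered at each step determines which step runs next, rather than the sequence being fixed in advance.

```python
# Hypothetical sketch: an adaptive investigation loop. Unlike a fixed
# playbook, findings from one source determine which sources the agent
# queries next, and the investigation stops once the leads are exhausted.

def investigate(alert, sources):
    evidence = {}
    queue = ["sign_in_logs"]  # start from the alert's most obvious lead
    while queue:
        source = queue.pop(0)
        evidence[source] = sources[source](alert)
        # Adapt: a suspicious sign-in prompts follow-up checks that a
        # rigid playbook would not have scheduled for a benign one.
        if source == "sign_in_logs" and evidence[source]["impossible_travel"]:
            queue += ["mailbox_rules", "mfa_changes"]
    risk = "high" if len(evidence) > 1 else "low"
    return {"risk": risk, "evidence_gathered": list(evidence)}

# Toy stand-ins for real telemetry sources.
sources = {
    "sign_in_logs": lambda a: {"impossible_travel": a["geo_anomaly"]},
    "mailbox_rules": lambda a: {"new_forwarding_rule": True},
    "mfa_changes": lambda a: {"method_added": False},
}
```

An anomalous alert widens the investigation to mailbox rules and MFA changes; a benign one stops after a single query. The same adaptivity is what distinguishes an agent from a SOAR playbook that runs every step every time.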
However, this is not the same as a fully autonomous SOC. In an agentic SOC model, human analysts remain in the loop for execution decisions. The AI does the heavy lifting of data gathering, context enrichment, and analysis; the human makes the final call on containment, escalation, and response. This is a deliberate design choice, not a limitation.
Agentic vs Autonomous: Why the Distinction Matters
The terminology matters because it reflects fundamentally different risk profiles. A fully autonomous SOC operates end-to-end without human intervention. An agentic SOC uses AI to augment human decision-making while retaining human oversight for actions that carry business impact.
For CNI operators, this distinction is critical. Consider the difference between an AI agent that investigates a suspected account compromise and presents findings for analyst review versus one that automatically disables accounts and isolates systems. Both might reach the same conclusion, but the blast radius of an incorrect automated response in an environment where IT and operational technology converge could extend to physical safety and service delivery.
Recent industry analysis suggests that while agentic SOC capabilities are maturing rapidly, security leaders broadly agree that fully autonomous operations remain one to two years away from standard practice. The technology exists, but the governance frameworks, trust models, and regulatory alignment are still developing.
The Glass Box Approach
One of the most significant concerns with AI in security operations is the black box problem. If an AI reaches a conclusion but cannot explain how it got there, how does a CISO justify that decision to a regulator or board? How does an analyst learn from the investigation? How do you identify when the AI has made an error?
An effective agentic SOC implementation must be a glass box, not a black box. Every decision should be traceable. Every piece of evidence should be auditable. Every recommendation should come with an explanation of the reasoning that produced it. This transparency is not just good practice; for CNI operators subject to frameworks like the NCSC Cyber Assessment Framework, it is essential for demonstrating that your security operations meet the required indicators of good practice.
This transparency extends beyond your internal team. As organisations mature, many move toward co-managed SOC models where visibility is shared between provider and customer. Gartner has recognised this trend, listing Bridewell as a representative provider for two consecutive years. In a co-managed environment, the same explainability that supports your internal governance must also enable your security team to understand and validate the work being done on your behalf.
At Bridewell, we have built our agentic SOC capabilities around this principle. Full traceability means analysts, whether ours or yours, can see exactly what evidence was gathered, what logic was applied, and why a particular recommendation was made. Our AI infrastructure is privately hosted on sovereign cloud environments, with data controls that meet ISO 27001, ISO 27017, and SOC 2 Type II requirements. As managed service providers come into scope under the Cyber Security and Resilience Bill in 2026, these controls become even more critical.
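What a glass-box decision trail might look like in practice can be sketched as a simple append-only audit record, where every evidence fetch and every reasoning step is logged alongside the final recommendation. This is an illustrative structure, not Bridewell's implementation; the class and field names are assumptions for the example.

```python
# Hypothetical sketch of "glass box" traceability: each evidence fetch and
# reasoning step is appended, timestamped, to an audit trail so an analyst,
# auditor, or regulator can replay exactly how a conclusion was reached.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InvestigationTrace:
    alert_id: str
    steps: list = field(default_factory=list)

    def record(self, kind: str, detail: str) -> None:
        self.steps.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "kind": kind,  # "evidence" or "reasoning"
            "detail": detail,
        })

trace = InvestigationTrace(alert_id="ALERT-1234")
trace.record("evidence", "Queried sign-in logs: 3 logins from a new ASN")
trace.record("reasoning", "New ASN plus off-hours access raises compromise likelihood")
trace.record("reasoning", "Recommend: reset credentials and review mailbox rules")
```

Because the trail is ordinary structured data, it can be surfaced identically to in-house analysts and to a customer's team in a co-managed model, which is what makes shared visibility practical rather than aspirational.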
A Progressive Trust Model
The path from agentic to autonomous does not need to be a binary switch. A well-designed agentic SOC can operate on a progressive trust model. Initially, all actions require human approval. The AI begins by providing summarisation and enrichment, giving analysts complete context before making recommendations. As confidence builds through validated outcomes, certain low-risk actions can be graduated to autonomous execution.
For example, after sufficient training and feedback, an agentic SOC might autonomously close alerts that have been consistently validated as false positives. But containment actions on production systems might always require human approval, regardless of AI confidence levels. The boundary between agentic and autonomous becomes tunable based on your organisation's risk appetite.
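The tunable boundary described above can be sketched as a small approval policy: whether an action may run autonomously depends on both its risk tier and the agent's validated track record, while high-impact containment actions always go to a human. The action names and thresholds below are illustrative assumptions, not a prescribed configuration.

```python
# Hypothetical sketch of a progressive trust model. Low-risk actions
# graduate to autonomous execution once validated accuracy clears a
# per-action threshold; production containment always needs a human.

APPROVAL_THRESHOLDS = {            # validated accuracy required for autonomy
    "close_false_positive": 0.95,
    "enrich_ticket": 0.80,
}
ALWAYS_HUMAN = {"isolate_host", "disable_account"}  # containment actions

def requires_human(action: str, validated_accuracy: float) -> bool:
    if action in ALWAYS_HUMAN:
        return True                # non-negotiable, regardless of confidence
    threshold = APPROVAL_THRESHOLDS.get(action)
    if threshold is None:
        return True                # unknown actions default to human review
    return validated_accuracy < threshold
```

Tightening risk appetite is then a configuration change, not a redesign: raise a threshold, or move an action into the always-human set.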
This approach has delivered measurable results. At Bridewell, by moving from traditional SOAR-based automation to an agentic AI investigation workflow, we have reduced mean time to respond for account compromise investigations from 29 minutes to under 9 minutes, with equal or greater accuracy than tier one and two analysts. The speed comes from AI handling evidence gathering and analysis in parallel; the accuracy comes from human oversight of the final decision.
What This Means for CNI Operators
If you are evaluating AI capabilities for your security operations, the questions to ask are not just about what the technology can do, but how it does it. Can you trace the reasoning behind recommendations? Can you tune the boundary between assisted and automated? Does the solution support your compliance requirements, or create new risks? And if you operate a co-managed model, does your provider give your team the same visibility and understanding that their own analysts receive?
An agentic SOC represents a significant step forward in security operations capability, but it needs to be implemented with governance and transparency at its core. For organisations where the consequences of getting it wrong extend beyond data to physical safety and essential services, the human-in-the-loop model is not a compromise. It is the responsible approach to deploying AI in security operations.
The agentic SOC is not about replacing human expertise. It is about amplifying it, at the speed that modern threats demand, with the transparency that critical infrastructure requires.