DeNexus Blog - Industrial Cyber Risk Quantification

AI Agents in Cybersecurity and Cyber Risk Management: 5 Critical Trends for 2026

Written by DeNexus | Jan 13, 2026 10:30:00 AM

Enterprises are moving beyond copilots that summarize and suggest. Google Cloud's AI Agent Trends 2026 report describes a fundamental shift toward AI agents that can pursue goals through multi-step workflows—coordinating tools, taking actions, and updating plans as new information arrives. In cybersecurity, the report frames this as a move from alerts to action, with security operations becoming one of the headline domains where agents will have near-term impact. 

The timing is not accidental. The report anchors the case in the operational reality of the modern SOC (security operations center): a high-volume environment where analysts are inundated by alerts and data. According to a survey of 3,466 enterprise decision makers, 82% of analysts are concerned or very concerned that they may be missing real threats or incidents due to the volume of alerts and data they face—an "alert fatigue" problem that traditional automation has not fully solved.

Where AI Agents Land First: Security Operations at Scale 

Security Operations Is Already a Production Use Case 

The report includes adoption signals suggesting agent deployments are no longer confined to pilots. Data from Google Cloud's ROI of AI 2025 report shows that 52% of executives at organizations using generative AI already have AI agents in production, with security operations among the cited examples. More specifically, 46% of executives at organizations with agents in production are adopting agents for security operations and cybersecurity.

Why Agents, Not Just More SOAR? 

The report contrasts the incremental gains of traditional Security Orchestration, Automation and Response (SOAR) platforms with an agent model that can reason, act, observe, and adjust as evidence changes—one designed to manage investigations dynamically rather than execute a static playbook. This represents a fundamental shift from rule-based automation to adaptive, context-aware response systems.

The "Agentic SOC" Operating Model 

The centerpiece security concept is an agentic SOC—a system of task-based agents orchestrated toward a shared outcome. The report depicts a semi-autonomous cycle triggered by an alert, with structured human oversight points (for example, escalation and recommendation), and AI-driven functions for detection and analysis.  
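
To make the cycle concrete, here is a minimal sketch (ours, not the report's) of a semi-autonomous triage loop: the agent acts through a tool, observes the result, and either closes the alert, keeps investigating, or escalates to a human analyst. All names, thresholds, and logic are illustrative assumptions.

```python
# Illustrative sketch of a semi-autonomous SOC triage cycle: reason -> act ->
# observe -> adjust, with a human escalation point. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    source: str
    severity: float                      # 0.0 (benign) .. 1.0 (critical)
    evidence: list = field(default_factory=list)

def enrich(alert: Alert) -> float:
    """Stand-in for tool calls (EDR lookups, threat intel, log queries);
    returns an updated severity estimate from the gathered evidence."""
    alert.evidence.append(f"queried intel for {alert.source}")
    return min(1.0, alert.severity + 0.1 * len(alert.evidence))

def triage(alert: Alert, max_steps: int = 5,
           close_below: float = 0.2, escalate_above: float = 0.8) -> str:
    for _ in range(max_steps):
        severity = enrich(alert)         # act, then observe the result
        if severity < close_below:       # benign: close autonomously
            return "closed"
        if severity > escalate_above:    # human oversight point
            return "escalated_to_analyst"
        alert.severity = severity        # adjust plan, keep investigating
    return "escalated_to_analyst"        # still uncertain: hand off to a human

print(triage(Alert(id="A-1042", source="10.0.0.7", severity=0.55)))
```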

The report breaks down SOC work into agent-aligned roles: 

  • Triage and investigation 
  • Threat research and hunt 
  • Malware analysis 
  • Detection engineering 
  • Response 

To make this work across the broader security stack, the report emphasizes tool and context integration—citing the Model Context Protocol (MCP) as an enabler for building workflows across tools (including third-party tools) and Agent-to-Agent (A2A) style interoperability for multi-agent coordination.  
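
As a concrete illustration of the tool-integration point, the snippet below sketches how a SOC capability might be exposed to agents, assuming the FastMCP helper from the MCP Python SDK; the tool name and its logic are hypothetical stand-ins, not a production integration.

```python
# Minimal sketch: exposing a (hypothetical) SIEM lookup as an MCP tool, using
# the FastMCP helper from the MCP Python SDK. Tool name/logic are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("soc-tools")

@mcp.tool()
def lookup_indicator(indicator: str) -> dict:
    """Return what the SIEM knows about an IP, domain, or file hash."""
    # Placeholder: a real server would query the SIEM's API here.
    return {"indicator": indicator, "sightings": 3, "last_seen": "2026-01-12"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable agent can call it
```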

What Changes for Analysts 

As agents absorb the "always-on" alert monitoring burden, the report anticipates analysts shifting toward higher-value work: threat hunting, supervising agents, and long-horizon defense. It calls out supervising agents as a real job function—tuning "rules of engagement" and reviewing the performance of automated responses.  
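
What tuning "rules of engagement" might look like in practice: a declarative policy bounding what an agent may do autonomously versus what requires analyst sign-off. This is an illustrative sketch of our own, not a format defined in the report.

```python
# Hypothetical "rules of engagement" policy an analyst-supervisor might tune:
# which response actions an agent may take on its own, and which require
# human approval. Purely illustrative; not a schema from the report.
RULES_OF_ENGAGEMENT = {
    "quarantine_file":      {"autonomous": True,  "max_severity": 0.7},
    "disable_user_account": {"autonomous": False},  # always needs approval
    "isolate_host":         {"autonomous": True,  "max_severity": 0.5,
                             "scope": ["workstations"]},  # never servers
}

def requires_approval(action: str, severity: float) -> bool:
    rule = RULES_OF_ENGAGEMENT.get(action, {"autonomous": False})
    if not rule.get("autonomous", False):
        return True                      # unknown or gated actions escalate
    return severity > rule.get("max_severity", 0.0)

assert requires_approval("disable_user_account", 0.3) is True
assert requires_approval("quarantine_file", 0.4) is False
```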

A Vendor Example: Agent Coordination in SOC Workflows 

The report cites Torq's approach as an example of an AI SOC analyst coordinating specialized agents across the incident lifecycle. According to the report, Torq's Socrates platform, running on Google Cloud infrastructure, delivers 90% automation of Tier-1 analyst tasks (auto-remediated without human involvement), a 95% reduction in manual tasks, and 10x faster response times.

Beyond the SOC: Vulnerability Discovery and Offensive Validation 

Vulnerability Discovery and Code Security Improvement 

Looking ahead, the report states that in 2026, agents will increasingly help with vulnerability discovery, along with alert triage and investigation. It cites Google DeepMind's CodeMender as an example of an agent that improves code security automatically. Early results demonstrate CodeMender's ability to find new zero-day vulnerabilities in well-tested software.  

As organizations adopt AI-powered vulnerability discovery, the challenge shifts from simply identifying vulnerabilities to prioritizing remediation based on true business risk. Solutions like DeNexus' Quantified Vulnerability Management (QVM) complement agent-driven discovery by translating vulnerability data into quantified financial impact, enabling security teams to focus resources on the exposures that pose the greatest risk to business operations. 
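
As a toy illustration of that prioritization idea (invented figures, and not DeNexus' actual QVM model), ranking findings by expected financial loss can reorder a queue that raw technical severity would sort differently:

```python
# Toy example of risk-based prioritization: rank vulnerabilities by expected
# financial loss (likelihood x impact) rather than raw technical severity.
# Figures and the formula are illustrative, not DeNexus' actual QVM model.
findings = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "annual_likelihood": 0.02, "impact_usd": 1_000_000},
    {"cve": "CVE-2026-0002", "cvss": 7.5, "annual_likelihood": 0.30, "impact_usd": 4_000_000},
    {"cve": "CVE-2026-0003", "cvss": 8.1, "annual_likelihood": 0.05, "impact_usd":   200_000},
]

for f in findings:
    f["expected_loss_usd"] = f["annual_likelihood"] * f["impact_usd"]

# The CVSS 7.5 finding jumps to the top once business impact is considered.
for f in sorted(findings, key=lambda f: f["expected_loss_usd"], reverse=True):
    print(f'{f["cve"]}: CVSS {f["cvss"]}, expected loss ${f["expected_loss_usd"]:,.0f}')
```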

Attack Surface Management and Penetration Testing 

The report also points to agent usage on the offensive and security validation side. It describes Specular, an offensive cybersecurity platform that builds AI agents using the Gemini 2.5 Pro model to automate attack surface management and penetration testing. Specular's platform automates traditional workflows to identify, assess, and remediate cybersecurity issues, helping enterprises quickly prioritize and respond to threats.  

The CISO View: Risk Economics, Not Novelty 

The report frames the CISO mandate in economic terms: achieving the greatest reduction in risk per dollar spent. In that context, it positions agents as essential because they can detect and respond faster to enterprise risks—and elevate SOC analysts from tactical responders to strategic defenders.  

The report also includes a clear cautionary note: AI is transforming both offense and defense, and AI infrastructure—including models, data, and agents—expands an enterprise's attack surface. It argues security teams must become "bilingual" in AI and security to manage this transition effectively.  

Cyber Risk Management: Governance, Compliance, Accountability 

Framework-Driven Risk Control for Agentic Systems 

As autonomy increases, governance moves from policy statements to enforceable controls. The report points to the expanded Secure AI Framework (SAIF) 2.0 as a way to address rapidly emerging risks posed by autonomous AI agents. The framework provides a defensive standard for securing AI infrastructure, including models, data, and agents, against both traditional and advanced AI-specific threats.

Framework-driven risk control for agentic systems focuses on several technical and organizational pillars: 

  • Model Context Protocol (MCP) to create standardized connections for AI applications, allowing agents to connect safely with managed databases and share common security data sources 
  • Agent-to-Agent (A2A) protocol, an open standard that allows interoperable orchestration between agents from different developers or frameworks (see the illustrative sketch after this list) 
  • Continuous training of agents on evolving real-world insights from security experts 
  • Human-in-the-loop oversight where humans act as orchestrators and final decision-makers, defining rules of engagement and performing performance reviews of automated responses 
  • Data hygiene training for employees to understand what data can and cannot be fed into AI tools 
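
To ground the A2A bullet above, the sketch below shows a generic capability descriptor in the spirit of A2A-style agent discovery: each agent publishes what it can do so an orchestrator or peer agent can find and delegate to it. The field names and registry logic are our invention, not the actual A2A schema.

```python
# Illustrative agent "capability card" in the spirit of A2A-style interop.
# Field names are invented for this sketch; this is not the A2A schema.
import json

TRIAGE_AGENT_CARD = {
    "name": "triage-agent",
    "description": "Classifies and enriches incoming SOC alerts",
    "endpoint": "https://agents.example.internal/triage",   # hypothetical
    "skills": [
        {"id": "classify_alert", "input": "alert_json", "output": "verdict"},
        {"id": "enrich_indicator", "input": "ioc", "output": "context"},
    ],
}

def find_agent_for(skill_id: str, registry: list) -> dict | None:
    """Pick the first registered agent advertising the requested skill."""
    for card in registry:
        if any(s["id"] == skill_id for s in card["skills"]):
            return card
    return None

registry = [TRIAGE_AGENT_CARD]
print(json.dumps(find_agent_for("classify_alert", registry), indent=2))
```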

Compliance Workflows Become Agent-Driven and Auditable 

In regulated sectors, the report anticipates multi-step agentic compliance systems that can monitor regulatory changes, identify impacted policies, update internal workflows, and create a complete audit chain—an operating model shift that brings compliance closer to continuous control management.  
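
One way to picture a "complete audit chain" is as a tamper-evident log: each agent action appends a record linked to the previous one by a hash, so any later alteration is detectable. A minimal sketch under that assumption, ours rather than the report's:

```python
# Minimal sketch of a tamper-evident audit chain for agent-driven compliance
# actions: each record embeds the hash of the previous one, so any later
# alteration breaks the chain. Illustrative only.
import hashlib, json, time

def append_record(chain: list, action: str, detail: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"ts": time.time(), "action": action, "detail": detail,
              "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

chain: list = []
append_record(chain, "reg_change_detected", "EU NIS2 technical guidance update")
append_record(chain, "policy_flagged", "IR-policy section 4.2 impacted")
append_record(chain, "workflow_updated", "patch-window SLA tightened to 72h")
print(verify(chain))  # True; flips to False if any record is edited
```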

Training as a Risk Control 

The report also treats security as an enterprise-wide responsibility as agent-accelerated threats become more sophisticated—calling for employee training on what data can and cannot be used in AI tools, and how to recognize social engineering enhanced by AI.  

Human Oversight Remains Central 

To balance speed and safety, the report emphasizes that humans remain the orchestrators and final decision-makers in agentic systems. Every employee—from entry-level analysts to senior vice presidents—becomes a human supervisor of agents, with core responsibilities to: 

  1. Delegate mundane or repetitive tasks – Identify which tasks are best suited for an agent and assign them.
  2. Set goals – Clearly define the desired outcome for the agent.
  3. Outline strategy – Use human judgment to guide the agents and make the final, nuanced decisions that AI can't.
  4. Verify quality – Act as the final checkpoint for quality, accuracy, and tone.

Even entry-level employees could have supervision and management responsibilities over AI agents that help them accomplish more.

Opportunities This Creates for Cyber Risk Quantification 

The report's emphasis on "risk reduction per dollar" and faster detect-and-respond creates a practical opening for cyber risk quantification to move from periodic estimates to metrics-backed measurement. If agents compress time-to-triage and time-to-contain in the SOC, security leaders can translate those operational deltas into quantified impact—particularly for loss scenarios where dwell time and response speed drive severity (for example, containment before widespread encryption or exfiltration). 

For organizations implementing agentic security operations, platforms like DeNexus' Cyber Risk Quantification (CRQ) enable security leaders to translate improved response metrics into financial terms. By modeling how faster detection and response reduce the probability and magnitude of loss events, CRQ solutions help demonstrate the business value of agent deployments in terms executives understand: dollars of risk reduced per dollar invested.
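
As a stylized example of that arithmetic (all figures invented for illustration, not output from a DeNexus model), compare annualized loss expectancy before and after an agent deployment that lowers both event frequency and mean loss per event:

```python
# Stylized cyber risk quantification arithmetic: translate faster detect-and-
# respond into risk reduced per dollar. All figures are invented.
def ale(annual_event_freq: float, mean_loss_usd: float) -> float:
    """Annualized loss expectancy = expected events/year x expected loss/event."""
    return annual_event_freq * mean_loss_usd

before = ale(annual_event_freq=0.40, mean_loss_usd=6_000_000)   # slow containment
after  = ale(annual_event_freq=0.30, mean_loss_usd=3_500_000)   # agentic SOC
agent_program_cost = 750_000                                     # annual spend

risk_reduced = before - after
print(f"ALE before: ${before:,.0f}  after: ${after:,.0f}")
print(f"Risk reduced per dollar spent: {risk_reduced / agent_program_cost:.2f}")
```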

In parallel, the report's direction toward agentic compliance systems with a complete audit chain supports stronger quantification inputs. Continuous evidence of control operation, workflow updates, and traceable approvals can reduce uncertainty in risk assumptions and improve the defensibility of quantified outcomes—especially when paired with the governance questions the report raises around authorization, accuracy, and accountability for agent-initiated actions. In other words, the same guardrails needed to operate agents safely also create a richer, more reliable dataset for quantifying control effectiveness and tracking risk movement over time. 

What to Watch as This Becomes Mainstream 

The report closes by naming the move to agentic security operations as one of the critical shifts expected in 2026. If that trajectory holds, the differentiator will not be whether an organization "uses agents," but whether it can scale them with: 

  1. Disciplined governance and auditability – Establishing clear rules of engagement, escalation protocols, and complete audit chains for agent actions 
  2. Well-defined rules of engagement – Training agents on continuously evolving security insights while maintaining human oversight at critical decision points 
  3. A risk program that can quantify and communicate the business value – Translating faster, more consistent security outcomes into financial metrics that demonstrate risk reduction per dollar spent 

With 88% of agentic AI early adopters reporting positive ROI on at least one generative AI use case (Google Cloud AI Agent Trends 2026), the organizations that lead in 2026 will be those that balance agent autonomy, human supervision, and the governance infrastructure needed to deploy agents at enterprise scale.