Back in December, I wrote this blog post predicting that agentic AI would be the defining cybersecurity buzzword of 2025. It wasn’t just a trend-watching exercise—it reflected a more profound shift we were already seeing in how AI agents could meaningfully transform security operations.
Now, just a few months later at the RSAC™ 2025 Conference, that prediction has become reality. In conversations with analysts at the conference, I found myself comparing agentic AI to the Internet. It’s no longer about whether you use it—of course you do. The question is: what are you using it for? That’s the real conversation we need to have. Because now that agentic AI is everywhere, the next step is figuring out how to evaluate it, apply it, and actually get value from it inside the SOC.
RSAC 2025: All Eyes on Agentic AI
The cybersecurity media covering this year’s RSAC Conference identified agentic AI as the defining theme, signaling a dramatic shift from theoretical discussions to real-world deployment.
- SiliconANGLE noted that agentic AI, alongside security replatforming, topped the agenda. Analysts emphasized that autonomous systems are expected to take over much of the “pedestrian” workload, requiring security leaders to reassess their teams’ expertise and strategies.
- ITPro observed an “all-in” vibe around AI for security, with companies showcasing purpose-built models and platforms. Concerns around scalability, control, and power usage tempered the excitement, but didn’t slow the trend.
- SC Media captured the industry shift well, saying that agentic AI has moved from “speculative potential to operational urgency.” Organizations must now focus on testing and managing these tools like any other critical system.
Taken together, the media coverage painted a clear picture: agentic AI is no longer an emerging trend—it has become table stakes. The discussion is shifting away from whether to adopt it and toward how to deploy and manage it responsibly and effectively.
From Shiny Object to Strategic Asset: How to Evaluate AI SOC Solutions
As agentic AI moves from concept to deployment, security leaders must be ready to assess the real-world impact of these solutions. To help, we put together this AI SOC solution evaluation framework, highlighting the importance of:
- Escalation Rate: The percentage of alerts an AI system escalates to human analysts rather than investigating and resolving on its own. A high escalation rate may indicate that the AI agent isn't effective at making decisions, or that it lacks the confidence or context to take action.
- Accuracy: Accuracy has two components. The first is correctly identifying real threats, and the second is correctly dismissing benign activity as non-threatening. If either component falls short, your SOC's outcomes will suffer.
- Average Investigation Time: This metric shows how effectively the AI agent reduces analyst burden. An agentic AI system should dramatically shrink the time it takes to investigate an alert from start to finish.
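To make these three metrics concrete, here is a minimal sketch of how they might be computed from a set of resolved alerts. The `AlertOutcome` record and the field names are hypothetical, assumed for illustration; in practice the ground-truth labels would come from analyst review of the AI's verdicts.

```python
from dataclasses import dataclass

@dataclass
class AlertOutcome:
    is_threat: bool               # ground truth: was this a real threat?
    ai_verdict: bool              # did the AI flag it as a threat?
    escalated: bool               # did the AI hand it off to a human?
    investigation_minutes: float  # time from alert to verdict

def evaluate(outcomes: list[AlertOutcome]) -> dict[str, float]:
    n = len(outcomes)
    threats = sum(o.is_threat for o in outcomes)
    benign = n - threats
    # Escalation rate: share of alerts the AI could not close on its own.
    escalation_rate = sum(o.escalated for o in outcomes) / n
    # Accuracy, split into its two components from the framework above.
    true_positives = sum(o.is_threat and o.ai_verdict for o in outcomes)
    true_negatives = sum(not o.is_threat and not o.ai_verdict for o in outcomes)
    detection_rate = true_positives / threats if threats else 1.0
    benign_dismissal_rate = true_negatives / benign if benign else 1.0
    # Average investigation time across all alerts.
    avg_investigation = sum(o.investigation_minutes for o in outcomes) / n
    return {
        "escalation_rate": escalation_rate,
        "detection_rate": detection_rate,
        "benign_dismissal_rate": benign_dismissal_rate,
        "avg_investigation_minutes": avg_investigation,
    }
```

Tracking these numbers as a baseline before deployment, then again after, is one way to separate measurable operational impact from vendor claims.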
But beyond evaluation metrics, it’s also essential to consider the tools the agent can access within the platform. Ask AI SOC vendors if their agents can:
- Collect and analyze files, logs, command lines, and memory images
- Perform smart queries against IDP data
- Parse raw email data, scan attachments, and analyze URLs and IPs
- Correlate alerts to identify patterns and threats
- Leverage threat intelligence to contextualize activity
- Investigate logs across platforms and enrich findings
These deep, built-in capabilities distinguish surface-level triage from autonomous, forensic-grade investigations. At Intezer, we've built our Autonomous SOC Platform on these capabilities, giving our AI agents access to unparalleled analysis tools, from memory forensics to genetic code analysis, ensuring their actions are both powerful and precise.
What’s Next: Strategic Adoption Over Blind Faith
As CRN put it, AI is no longer optional. As we move beyond the RSAC hype cycle, security leaders must focus on three things:
- Discerning real capabilities from marketing gloss
- Establishing internal benchmarks for agentic AI performance
- Insisting on explainability, auditability, and human oversight
Agentic AI is not just a buzzword; it's here to stay. To get the most value out of these solutions, organizations need a thoughtful evaluation framework focused on transparency, operational impact, and the actual capabilities of the deployed agentic AI tools.
If you're curious how Intezer's approach to agentic AI stacks up, reach out or see it for yourself.
