Cybersecurity’s Latest Buzzword Has Arrived: What Agentic AI Is And Isn’t
Cybersecurity vendors have come out of the woodwork in the past few months to announce their “agentic AI” innovations. These include vendors such as Swimlane, ReliaQuest, Dropzone AI, Intezer, and others.
Some are announcing legitimate agentic AI features, while others are rebranding existing machine-learning or generative AI features to ride the hype: The blob strikes again!
Matters are further complicated because the definition and understanding of agentic AI capabilities are as much in flux as the rest of the generative AI market.
After significant research and careful consideration, Forrester released a report defining agentic AI: Agentic AI Is Rising And Will Reforge Businesses That Embrace It (client-only access). According to this research, agentic AI is:
Systems of foundation models, rules, architectures, and tools which enable software to flexibly plan and adapt to resolve goals by taking action in their environment, with increasing levels of autonomy.
Agentic AI Is A Subset Of AI Agents
To be explicitly clear: An AI agent is not the same thing as agentic AI. AI agents have been around for literal decades. Back when I was getting my computer engineering degree, I had to build an AI agent for an artificial intelligence class. The agent wasn’t anything crazy (certainly not to the level of generative AI) … it was a knowledge-based agent meant to understand and navigate the Wumpus world.
The concept of AI agents and their implementation are far from new. The challenge is that the majority of AI agents that have been developed don’t operate without human intervention. AI agents such as Waymo’s and Tesla’s self-driving systems, Apple’s Siri, and Amazon’s Alexa all require some level of human input, with Waymo being the most advanced by far.
In contrast to AI agents, agentic AI is one or more AI agents that operate without human intervention. They learn and adapt to feedback and inputs to achieve a certain mission. Judges and critics are components that allow the system to perform more effectively, where the judge evaluates the output to ensure accuracy and the critic evaluates the output for specific flaws, biases, or ethical risks. In that way, agentic AI is a subset of AI agents that operate autonomously.
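The judge/critic loop described above can be sketched in a few lines of Python. Everything here is an illustrative placeholder (the function names, the toy "flaw" the critic flags, the revision mechanism), not any vendor's actual implementation — the point is only the control flow: the agent acts, the judge checks accuracy, the critic checks for specific flaws, and feedback loops back in without a human.

```python
def agent_act(goal: str, feedback: list[str]) -> str:
    """Toy 'agent': produces a candidate output, adapting to prior feedback."""
    output = f"plan for {goal}"
    if feedback:
        output += " (revised: " + "; ".join(feedback) + ")"
    return output

def judge(output: str, goal: str) -> bool:
    """Judge: evaluates the output to ensure accuracy against the goal."""
    return goal in output

def critic(output: str) -> list[str]:
    """Critic: evaluates the output for specific flaws, biases, or risks."""
    issues = []
    if "revised" not in output:
        # Placeholder flaw, just to force one revision cycle in this sketch.
        issues.append("add risk review")
    return issues

def run_agentic_loop(goal: str, max_iters: int = 3) -> str:
    """Loop until the judge approves and the critic has no complaints."""
    feedback: list[str] = []
    output = ""
    for _ in range(max_iters):
        output = agent_act(goal, feedback)
        issues = critic(output)
        if judge(output, goal) and not issues:
            return output          # accepted: no human intervention needed
        feedback.extend(issues)    # critique feeds the next iteration
    return output
```

In a real system the agent, judge, and critic would each be model calls with distinct prompts; the loop structure is what makes the behavior "agentic" rather than a single request/response.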
Agentic AI Enables More Complex Use Cases Than RAG
Agentic AI requires a different architecture than we see with retrieval-augmented generation (RAG), which merely pulls relevant information rather than orchestrating a series of complex steps. Unlike agentic AI, RAG does not do significant planning or reasoning across information and cannot take adaptable action within enterprise environments. Agentic AI may use RAG as part of its capabilities, however.
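The architectural difference can be made concrete with a minimal sketch: plain RAG is a single retrieve-then-generate step, while an agentic system plans a sequence of steps and takes actions, with retrieval as just one tool in that plan. All names and the tiny document store here are hypothetical, for illustration only.

```python
# Toy "knowledge base" standing in for a vector store.
DOCS = {"phishing": "Phishing playbook: verify sender, check links."}

def retrieve(query: str) -> str:
    """The retrieval half of RAG: pull relevant context, nothing more."""
    return DOCS.get(query, "")

def rag_answer(query: str) -> str:
    """Plain RAG: retrieve, then generate once. No planning, no actions."""
    context = retrieve(query)
    return f"answer using: {context}"

def agentic_run(goal: str) -> list[str]:
    """Agentic sketch: execute a multi-step plan, acting on the environment,
    with retrieval (RAG) used as one tool among several."""
    plan = ["retrieve", "analyze", "act"]
    trace = []
    for step in plan:
        if step == "retrieve":
            trace.append(retrieve(goal))
        elif step == "analyze":
            trace.append(f"assessed {goal} severity")
        elif step == "act":
            trace.append(f"closed {goal} alert")  # action in the environment
    return trace
```

The contrast is in the return shapes: `rag_answer` produces one grounded response, while `agentic_run` produces a trace of planned steps ending in an action — the part RAG alone cannot do.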
Security Tools Are Using Agentic AI To Automate Triage
Agentic AI is being used to automate alert triage and some aspects of investigation in security tools. It has been particularly useful in automating triage and, in some cases, closing alerts related to phishing, though other use cases are on the horizon.
While this blog breaks down what agentic AI is, we can’t cover securing generative AI in depth here. For advice on how to prepare for and secure agentic AI adoption in the enterprise, check out the report, Top Recommendations For Your Security Program, 2025.
If you have more questions about how agentic AI is used in security tools, or if you want to talk about a particular vendor, book an inquiry or guidance session with me.