I jokingly referred to last year’s Global Privacy Summit as having the theme “AI Is Everything Everywhere All At Once,” but it’s not that far from the truth. Generative AI (genAI) has reached peak hype: Forrester data shows that only 7% of US online adults say they’ve never heard of genAI. Meanwhile, AI continues to evolve and become more intelligent.

One area of innovation is the rise of AI agents, which take the decisioning power of AI and layer on an action component. These agents don’t just observe and identify patterns; they take action. That capability landed AI agents among the top 10 emerging technologies of 2024.
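To make the observe-decide-act distinction concrete, here is a minimal sketch of that pattern. It is purely illustrative: the names (Observation, decide, act, the refund scenario) are hypothetical, not a real framework or a Forrester reference architecture, and in a production agent the decision step would be a model rather than hard-coded rules.

```python
# Illustrative sketch: what separates an "agent" from a purely predictive model
# is the action layer. All names and the refund scenario are hypothetical.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Observation:
    """A single event the agent can reason about."""
    customer_id: str
    issue: str
    order_value: float


def decide(obs: Observation) -> str:
    """The decisioning layer: map an observation to an intended action.
    In a real agent this would be a model, not hard-coded rules."""
    if obs.issue == "damaged_item" and obs.order_value < 50:
        return "issue_refund"
    return "escalate_to_human"


def act(action: str, obs: Observation) -> None:
    """The action layer: the system doesn't just predict or recommend,
    it executes a step on the business's behalf."""
    handlers: dict[str, Callable[[Observation], None]] = {
        "issue_refund": lambda o: print(f"Refunding ${o.order_value:.2f} to {o.customer_id}"),
        "escalate_to_human": lambda o: print(f"Routing {o.customer_id} to a human agent"),
    }
    handlers[action](obs)


if __name__ == "__main__":
    # Observe -> decide -> act on a single incoming event.
    event = Observation(customer_id="C-1027", issue="damaged_item", order_value=24.99)
    act(decide(event), event)
```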

But as with all things AI, potential comes with risk. As AI becomes more agentic, making decisions on behalf of businesses and eventually consumers, the risks will grow more complex. In the not-so-distant future, when consumers set up digital doubles to act on their behalf, the identity and fraud landscape will become rockier: How do businesses ensure that digital doubles represent real people? And how do consumers ensure that the agents their digital doubles interact with are legitimate businesses and not scams?

AI agents are nascent today, but given the speed of change in AI capabilities, businesses must keep an eye on both the trends and the risks associated with them. I hope you’ll join me at Forrester’s Security & Risk Summit, where I’ll dive deeper into the five use cases that AI agents will tackle, spanning back-end business applications to consumer-facing and even consumer-owned ones, and highlight the opportunities and risks of each. The Summit is December 9–11; in the meantime, watch for a new report, “The State of AI Agents, 2024,” publishing this fall, and set up a guidance session for a closer look.