The developer keynote at Google Cloud Next 2025 (full takeaway blog here) offered a detailed look at the company’s latest AI innovations, with a particular focus on agent technology and developer tools. Cohosts Richard Seroter and Stephanie Wong brought both technical insight and their signature energy to the stage, keeping the audience engaged with well-timed humor as they guided attendees through a series of practical demonstrations that built on each other to showcase the potential of these technologies.

Agent Framework Takes Shape

The keynote opened with Brad Calder framing Google’s strategy around three key areas: agentic applications, developer productivity tools, and Gemini models. What followed was a series of interconnected demonstrations centered on a home renovation scenario, showcasing how multiple specialized agents could collaborate on complex tasks.

The newly released Agent Development Kit (ADK) appears designed to lower the barrier to entry for creating AI agents. Dr. Fran Hinkelmann demonstrated its three core components: instructions that define an agent’s goal, tools that let it take action, and a large language model (LLM) that powers its reasoning. The demonstration showed an agent generating a professional renovation proposal from floor plans and customer requirements.
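
To make those three components concrete, here is a minimal sketch using ADK’s Python API. The cost-estimate tool, its per-square-foot rates, and the instruction text are my own inventions for illustration, not a reconstruction of the demo:

```python
# pip install google-adk
from google.adk.agents import Agent

# A plain Python function can serve as a tool: ADK exposes it to the
# model using the function's signature and docstring.
def estimate_cost(square_feet: float, finish_level: str) -> dict:
    """Return a rough renovation cost estimate.

    Hypothetical tool; the per-square-foot rates are made up.
    """
    rate = {"basic": 90, "mid": 160, "premium": 275}.get(finish_level, 160)
    return {"estimate_usd": round(square_feet * rate, 2)}

proposal_agent = Agent(
    name="proposal_agent",
    model="gemini-2.0-flash",  # the model that handles LLM tasks
    instruction=(
        "You write professional home renovation proposals from floor "
        "plans and customer requirements. Call estimate_cost before "
        "quoting any figures."
    ),  # the instructions that define the goal
    tools=[estimate_cost],  # the tools that enable actions
)
```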

Building on this foundation, Dr. Abirami Sukumaran presented a multiagent system in which specialized agents for proposals, permits, and material ordering work together. When one agent encountered an error, she demonstrated cloud investigations, which provided automated debugging assistance.
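
ADK also lets agents delegate to one another, which is the pattern behind this kind of team. A sketch of how such a setup might be wired together using ADK’s sub_agents parameter; the agent names, instructions, and model choice are illustrative, not taken from the demo:

```python
from google.adk.agents import Agent

permit_agent = Agent(
    name="permit_agent",
    model="gemini-2.0-flash",
    description="Answers questions about required building permits.",
    instruction="Determine which permits a renovation plan needs.",
)

ordering_agent = Agent(
    name="ordering_agent",
    model="gemini-2.0-flash",
    description="Drafts material orders for approved plans.",
    instruction="Produce an itemized material order for the plan.",
)

# The coordinator delegates to whichever specialist fits the request;
# ADK uses the sub-agents' descriptions to route the work.
renovation_coordinator = Agent(
    name="renovation_coordinator",
    model="gemini-2.0-flash",
    instruction=(
        "Coordinate the renovation. Hand permit questions to "
        "permit_agent and material orders to ordering_agent."
    ),
    sub_agents=[permit_agent, ordering_agent],
)
```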

Developer Choice Emphasized

Google stressed flexibility throughout the keynote, with Debi Cabrera showcasing Gemini integration across popular IDEs, including Windsurf, Cursor, and IntelliJ. She also highlighted Vertex AI’s Model Garden, which supports models from other providers, including Meta, Anthropic, and Mistral.
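
That flexibility extends to code. For example, Claude models served through Vertex AI can be called with Anthropic’s Vertex-flavored client in a few lines of Python; the project ID, region, and model ID below are placeholders:

```python
# pip install "anthropic[vertex]"
from anthropic import AnthropicVertex

# Claude models in Model Garden are served by Vertex AI and called
# through Anthropic's Vertex client. All identifiers are placeholders.
client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")

message = client.messages.create(
    model="claude-3-5-sonnet-v2@20241022",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize this renovation proposal."}
    ],
)
print(message.content[0].text)
```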

Real-World Applications

In one of the more interesting demonstrations (I’m a baseball fan), MLB hackathon winner Jake DiBattista presented an application that used Gemini to analyze baseball pitching mechanics. His demo analyzed both professional pitcher Clayton Kershaw’s mechanics and, to humorous effect, Richard Seroter’s more amateur (but better than what I could muster!) efforts. The application demonstrated how computer vision capabilities that previously required specialized hardware are now accessible to developers with affordable tools.
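
The demo’s internals weren’t shown, but the underlying capability is broadly available through the Gemini API. Here’s a minimal sketch of video analysis with the google-genai Python SDK; the file name and prompt are mine, not from the demo:

```python
# pip install google-genai
import time
from google import genai

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

# Upload a short pitching clip; video files need a moment to process
# before they can be referenced in a prompt.
video = client.files.upload(file="pitch_clip.mp4")  # placeholder file
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        video,
        "Analyze this pitch: arm slot, stride length, release point, "
        "and one concrete suggestion for improvement.",
    ],
)
print(response.text)
```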

The Kanban Board: Bridging AI Hype And Real Product Team Workflows

Perhaps the most significant announcement was Scott Densmore’s preview of a Kanban board interface for Gemini Code Assist. Unlike the chat interfaces that have dominated AI coding assistants to date, this approach aligns with how development teams actually work. The board lets developers assign tasks to Code Assist, including bug fixes, code reviews, and prototype development, potentially offering a more intuitive workflow than conversation-based interactions.

Data Science Access Expands

Jeff Nelson demonstrated a Data Science Agent that transformed complex data analysis into an approachable process. With simple prompts, the agent generated forecasting models using BigQuery, Serverless Spark, and new foundation models such as TimesFM. This culminated in a deployed data app — suggesting that specialized AI agents may someday enable less technical users to build advanced capabilities.
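
As one measure of how approachable this stack has become, BigQuery now exposes forecasting with a managed TimesFM model through its AI.FORECAST function. A sketch of what that looks like from Python, with placeholder table and column names rather than the demo’s actual pipeline:

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

client = bigquery.Client()

# AI.FORECAST runs a managed TimesFM model directly over a time-series
# table; no model training step is required. Names are placeholders.
query = """
SELECT *
FROM AI.FORECAST(
  TABLE `my_project.sales.daily_orders`,
  data_col => 'units_sold',
  timestamp_col => 'order_date',
  horizon => 30)
"""
for row in client.query(query).result():
    print(row["forecast_timestamp"], row["forecast_value"])
```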

As the industry continues to evaluate the practical impact of these tools, the keynote made a compelling case that agent-based approaches might meaningfully change how software development and data analysis teams operate together. The demonstrations suggested that Google is working to integrate AI assistance into existing development workflows rather than requiring teams to adapt to entirely new paradigms.