AI-powered agents are no longer relegated to the realm of sci-fi: we’re on the cusp of having agents that can act as personal assistants, cybersecurity defenders, or supply-chain specialists. The list goes on, and the use cases are many and varied, in many ways limited only by our imaginations as we explore the transition from the generative to the agentic AI era.
Sci-fi evangelists and futurists have long imagined an AI that can work and act intelligently with advanced execution capabilities. New developments in agentic AI are making this a reality. With supercharged reasoning and execution, these AI systems are changing how humans and machines interact and work together. The potential is huge: greater productivity, innovation, and insight. But there are risks to consider, which we delve into further below.
We broadly characterize AI agents as software systems that use AI to pursue goals and complete tasks on behalf of users. They show reasoning, planning, and memory and have a level of autonomy to make decisions, learn, and adapt. Their capabilities are made possible in large part by the multimodal capacity of generative AI enabled by large language models, such as Gemini.
AI agents can process multimodal information, converse, reason, and make decisions. They can learn over time and facilitate transactions and business processes, working with other agents to coordinate and perform more complex workflows.
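The reason-plan-act-remember pattern described above can be sketched in a few lines. This is a minimal illustration only; the class and method names (`Agent`, `plan`, `act`, `run`) are our own assumptions, not the API of any particular framework, and the LLM and tool calls are stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)

    def plan(self, observation: str) -> str:
        # A real system would call an LLM here with the goal,
        # accumulated memory, and the latest observation.
        return f"step toward '{self.goal}' given '{observation}'"

    def act(self, action: str) -> str:
        # A real agent would invoke a tool or external API here.
        return f"result of ({action})"

    def run(self, observation: str, max_steps: int = 3) -> list:
        # Loop: plan from the current observation, act, and record
        # the (action, result) pair in memory for the next step.
        for _ in range(max_steps):
            action = self.plan(observation)
            observation = self.act(action)
            self.memory.append((action, observation))
        return self.memory

agent = Agent(goal="summarize quarterly sales")
history = agent.run("user request received")
```

The point of the sketch is the loop itself: autonomy comes from the agent choosing its next action based on what it has already done, which is also why memory and decision-making become security-relevant surfaces.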
Over the last few years, we’ve become accustomed to genAI-enabled capabilities, which we interact with as standalone applications or as part of product offerings to support us in performing tasks by responding to our prompts. The momentum in implementation has been incredible, with 601 real-world genAI use cases announced at Google Cloud Next 2025, a six-fold increase from just one year before!
This year, we’re starting to see the evolution from AI-enabled capabilities to AI assistants, AI agents, and ultimately multi-agent systems. They can reason, take action on the user’s behalf with varying degrees of autonomy, handle complex tasks and workflows, and adapt and improve their performance over time.
Take, for example, enterprise use cases where customer service AI agents deliver personalized customer experiences by understanding customer needs, answering questions, resolving customer issues, or recommending the right products and services. They work seamlessly across multiple channels including the web, mobile, or point of sale, and can be integrated into product experiences with voice or video. Similarly, consider security AI agents which can strengthen your organization’s security posture by mitigating attacks or increasing the speed of investigations. They can oversee security across various surfaces and stages of the security life cycle including prevention, detection, and response.
From a personal use perspective, consider an AI agent that can serve as a deep research agent, exploring complex topics on your behalf and synthesizing information across multiple sources. Likewise, AI agents can support idea generation, helping you innovate by autonomously developing novel ideas and evaluating them to find the optimal solution, which is then presented for your consideration. These are just some of the possibilities opened up by the coming era of agentic AI.
What’s common to these scenarios is that the AI agent’s usefulness and successful operation hinges on its integration into workflows and interconnectedness to various data sources, at least some of which are highly likely to contain sensitive information. Today, many agentic AI use cases are still only conceptual, but their potential is apparent. And while we may not be able to foresee all possible scenarios and outcomes, the criticality of security and data privacy is already clear.
Similar to other transformative technologies, a singular solution won't fully address the risks of agentic AI misuse, whether accidental or deliberate. Given the rapid advancements in this field, proactive consideration of implementation strategies and robust controls is essential. The increasing autonomy of AI systems inherently elevates their susceptibility to manipulation, adversarial attacks, and potential systemic failures.
The deployment of complex and interconnected AI agents raises significant considerations regarding cybersecurity, privacy and data protection, and governance and oversight. We outline these below, though this is by no means an exhaustive list.
Accountability and oversight are always key considerations, but they take on increased importance given agentic AI’s ability to act independently and execute actions that may bind the individual or organization on whose behalf they’re taken. Given this high standard, the following are important points to consider:
Given their interconnectivity and access to sensitive information, AI agents are likely to be prime targets for attackers. Compromised AI agents could be used to launch attacks or manipulate data. Attackers can use adversarial AI techniques to deceive or manipulate agentic AI systems such as by creating deceptive data or exploiting vulnerabilities in the AI's algorithms. While these risks pertain to existing AI systems, the attack surface and vectors increase in an agentic AI ecosystem. Adversarial use cases aside, even in normal operations, enterprise AI agents face security risks stemming from prompt injection attacks, unauthorized data access, and generating inappropriate content.
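One of the risks named above, prompt injection, can be made concrete with a small sketch. The pattern list and function below are purely illustrative assumptions: a heuristic filter like this is easy to bypass and would only ever be one layer alongside model-side safeguards, least-privilege tool access, and output filtering.

```python
import re

# Illustrative only: known injection phrases an agent might screen for
# in retrieved documents before passing them to the model.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .*(policy|rules)",
]

def flag_injection(text: str) -> bool:
    """Flag content that matches a known injection phrase."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

The example illustrates why the agentic attack surface is larger than a chat interface: any data source the agent reads, not just the user’s prompt, can carry instructions aimed at the model.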
In evaluating agentic AI security controls, consider the following:
AI agents require access to myriad data sources to operate effectively. This raises concerns about the collection, storage, access, and usage of sensitive data. Consider the following:
As AI agent use case development and implementation are still at a nascent stage, it’s important to start thinking through how these considerations can be addressed, so that organizations can harness the transformative potential of AI agents while mitigating risks and building trust with users and stakeholders. Importantly, consider the security measures that apply to the AI infrastructure overall, as well as to each of the applications and data sources the AI agent would need to access.
As a conceptual starting point, irrespective of what form of AI is being implemented, good security hygiene is paramount. Take a look at Google’s Secure AI Framework and best practices for securing the data, model, application, and infrastructure.
Further, as you build and manage AI agentic systems, the interconnectivity amongst AI agents necessitates a standardized communication mechanism to enable interoperability. Google was proud to announce the Agent2Agent Protocol, noting “this first-of-its-kind open standard enables AI agents built by different vendors or on different frameworks to securely communicate, exchange information and coordinate actions across various enterprise platforms.”
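To make the idea of standardized agent-to-agent communication concrete, here is a schematic task exchange between two hypothetical agents. The field names below are our own assumptions for illustration; the actual Agent2Agent wire format is defined by the open specification, not by this sketch.

```python
import json

# Hypothetical task request from one agent to another; all field
# names here are illustrative, not the real A2A schema.
task_request = {
    "task_id": "task-001",
    "from_agent": "inventory-agent",
    "to_agent": "pricing-agent",
    "intent": "get_price",
    "payload": {"sku": "ABC-123"},
}

# Serialize for transport, then parse on the receiving side.
wire = json.dumps(task_request)
received = json.loads(wire)

# The receiving agent answers by echoing the task identifier so the
# caller can correlate the response with its outstanding request.
response = {
    "task_id": received["task_id"],
    "status": "completed",
    "payload": {"price_usd": 19.99},
}
```

The design point a shared standard buys you is exactly this correlation and schema agreement: agents built by different vendors can exchange tasks without bespoke integrations, while security properties (authentication, authorization of each exchange) are handled at the protocol layer.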