Securing the Future of Agentic AI: Governance, Cybersecurity, and Privacy Considerations

MKaganovich

No longer relegated to the realm of sci-fi, AI-powered agents that can act as personal assistants, cybersecurity defenders, or supply-chain specialists are nearly within reach. The use cases are varied and, in many ways, limited only by our imaginations as we explore the transition from the generative to the agentic AI era.

Sci-fi evangelists and futurists have long imagined AI that can work and act intelligently with advanced execution capabilities. New developments in agentic AI are making this a reality. With supercharged reasoning and execution, these AI systems are changing how humans and machines interact and work together. The potential is huge: enabling productivity, innovation, and insight. But there are risks to consider, which we delve into below.


AI agents

We broadly characterize AI agents as software systems that use AI to pursue goals and complete tasks on behalf of users. They exhibit reasoning, planning, and memory, and have a degree of autonomy to make decisions, learn, and adapt. Their capabilities are made possible in large part by the multimodal capacity of generative AI, enabled by large language models such as Gemini.

AI agents can process multimodal information, converse, reason, and make decisions. They can learn over time and facilitate transactions and business processes, working with other agents to coordinate and perform more complex workflows.

Over the last few years, we’ve become accustomed to genAI-enabled capabilities, which we interact with as standalone applications or as part of product offerings that support us by responding to our prompts. The momentum in implementation has been incredible, with 601 real-world genAI use cases announced at Google Cloud Next 2025, a six-fold increase from just one year before!

This year, we’re starting to see the evolution from AI-enabled capabilities to AI assistants, AI agents, and ultimately multi-agent systems. They can reason, take action on the user’s behalf with varying degrees of autonomy, handle complex tasks and workflows, and adapt and improve their performance over time.


Use cases

Take, for example, enterprise use cases where customer service AI agents deliver personalized experiences by understanding customer needs, answering questions, resolving issues, and recommending the right products and services. They work seamlessly across channels including web, mobile, and point of sale, and can be integrated into product experiences with voice or video. Similarly, consider security AI agents, which can strengthen your organization’s security posture by mitigating attacks or speeding up investigations. They can oversee security across various surfaces and stages of the security life cycle, including prevention, detection, and response.

From a personal use perspective, consider an AI agent that can serve as a deep research agent, exploring complex topics on your behalf and synthesizing information across multiple sources. Likewise, AI agents can support idea generation, helping you innovate by autonomously developing novel ideas and evaluating them to find the optimal solution, which is then presented for your consideration. These are just some of the possibilities opened up by the coming era of agentic AI. 

What’s common to these scenarios is that the AI agent’s usefulness and successful operation hinge on its integration into workflows and its interconnection with various data sources, at least some of which are highly likely to contain sensitive information. Today, many agentic AI use cases are still conceptual, but their potential is apparent. And while we may not be able to foresee every scenario and outcome, the criticality of security and data privacy is already clear.

Similar to other transformative technologies, a singular solution won't fully address the risks of agentic AI misuse, whether accidental or deliberate. Given the rapid advancements in this field, proactive consideration of implementation strategies and robust controls is essential. The increasing autonomy of AI systems inherently elevates their susceptibility to manipulation, adversarial attacks, and potential systemic failures.


Assessing AI agent risk: key considerations 

The deployment of complex and interconnected AI agents raises significant considerations regarding cybersecurity, privacy and data protection, and governance and oversight. We outline these below, though this is by no means an exhaustive list.

1. Governance & Oversight

Accountability and oversight are always key considerations, but they take on increased importance given agentic AI’s ability to act independently and execute actions that may bind the individual or organization on whose behalf it acts. Given these elevated stakes, the following are important points to consider:

  • Is the AI agent suitable for the intended use case? Can it perform the intended task reliably across the range of expected deployment conditions? 
  • If the user’s intent is unclear, or can be interpreted in several ways, can the AI agent seek clarification before executing the next step based on its probabilistic interpretation of the prompt?
  • Are clear lines of responsibility and accountability defined for both internal and external users?
  • What technical and procedural steps are needed to establish guardrails such that the AI's actions remain within acceptable boundaries? 
  • Are the AI agent’s interactions clearly documented and controlled at every stage, such that the output is monitored based on pre-defined boundaries that enforce policies around data access and visibility?
  • What mechanisms are in place to enable human intervention, so that in certain high-risk circumstances a person can review proposed steps before they’re taken or override decisions? (A minimal sketch of such an approval gate follows this list.)
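To make the human-intervention point concrete, here is a minimal sketch of an approval gate in Python. Everything in it (the ProposedAction shape, the HIGH_RISK_TOOLS policy, the tool names) is a hypothetical illustration rather than a reference to any particular agent framework; the idea is simply that high-risk actions are held for a person instead of executed autonomously.

```python
# A minimal, hypothetical sketch of a human-in-the-loop guardrail.
# Tool names and the risk policy below are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str        # e.g., "issue_refund"
    arguments: dict  # parameters the agent wants to pass
    rationale: str   # the agent's stated reason for this step

# Illustrative policy: tools considered high risk require human sign-off.
HIGH_RISK_TOOLS = {"issue_refund", "delete_record", "sign_contract"}

def requires_approval(action: ProposedAction) -> bool:
    """Return True if a human must review this step before execution."""
    return action.tool in HIGH_RISK_TOOLS

def run_step(action: ProposedAction) -> None:
    if requires_approval(action):
        # In a real deployment this would enqueue the action for an
        # operator rather than print; the agent pauses until approval.
        print(f"[HOLD] '{action.tool}' awaiting human review: {action.rationale}")
        return
    print(f"[RUN] executing '{action.tool}' with {action.arguments}")

run_step(ProposedAction("issue_refund", {"order_id": "A-123"},
                        "customer reported a duplicate charge"))
```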

2. Cybersecurity

Given their interconnectivity and access to sensitive information, AI agents are likely to be prime targets for attackers. Compromised AI agents could be used to launch attacks or manipulate data, and attackers can use adversarial AI techniques to deceive or manipulate agentic systems, for example by crafting deceptive data or exploiting vulnerabilities in the AI’s algorithms. While these risks apply to existing AI systems, the attack surface and vectors expand in an agentic AI ecosystem. Adversarial use cases aside, even in normal operations, enterprise AI agents face security risks stemming from prompt injection attacks, unauthorized data access, and the generation of inappropriate content.

In evaluating agentic AI security controls, consider the following:

  • Are current cybersecurity detection and risk mitigation strategies aligned with agentic AI functionality and the resultant change in attack surface and points of entry?
  • Are identity controls configured so that an agent acting for or on behalf of a particular user has access privileges consistent with that user’s, covering both authentication and authorization? Are relationships between users and AI agents mapped one-to-one or one-to-many, and if the latter, how does this impact model oversight? (See the sketch after this list.)
  • Does the current incident response plan take agentic AI into consideration, and if not, what changes are needed?
  • How has supply chain risk changed given the increased interconnectivity between applications, data sources and AI models? How will current oversight mechanisms need to be updated as a result?
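One way to reason about the identity question above: each tool call an agent makes can be authorized against the end user’s own grants rather than a broad service-account identity. The sketch below is a hypothetical illustration (user_permissions, TOOL_SCOPES, and call_tool are all invented names), not an integration with any real IAM system.

```python
# A hypothetical illustration of scoping an agent's tool calls to the
# end user's own permissions; none of these names come from a real IAM API.

user_permissions = {
    "alice": {"read:tickets", "write:tickets"},
    "bob": {"read:tickets"},
}

# Which permission each agent tool requires (illustrative).
TOOL_SCOPES = {
    "fetch_ticket": "read:tickets",
    "close_ticket": "write:tickets",
}

def call_tool(user: str, tool: str, **kwargs):
    """Authorize each tool call against the user's grants, not a broad
    service account, so the agent can never exceed the user's access."""
    required = TOOL_SCOPES[tool]
    if required not in user_permissions.get(user, set()):
        raise PermissionError(f"{user} lacks '{required}' needed by '{tool}'")
    print(f"'{tool}' invoked for {user} with {kwargs}")

call_tool("alice", "close_ticket", ticket_id=42)    # allowed
# call_tool("bob", "close_ticket", ticket_id=42)    # would raise PermissionError
```

The design point is that the agent carries no standing privileges of its own: if the user cannot perform an action, neither can the agent acting on their behalf.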

3. Privacy and Data Protection

AI agents require access to myriad data sources to operate effectively. This raises concerns about the collection, storage, access and usage of sensitive data. Consider the following:

  • What type of data will the agent actually or potentially need? Is sensitive data adequately protected by confining the agent activity within secure perimeters to prevent data exfiltration?
  • Are mechanisms in place to prevent the AI agent from repurposing data without explicit authorization, so that, for instance, information a user provides for one purpose isn’t then used for another? (A sketch of this idea follows the list.)
  • What controls are in place to prevent data leakage? How can privacy enhancing technologies be leveraged here as a mitigant?
  • Did the user / customer receive adequate notice and disclosure regarding how their data may be accessed and used by the AI agent, in line with applicable laws and regulations?
  • Which preventive and detective controls may need to be implemented to verify that AI agents comply with relevant data protection laws and regulations, including those pertaining to digital sovereignty, where applicable?
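As a thought experiment for the purpose-limitation question above, data can be tagged at collection time with the purpose the user consented to, and every later use checked against that tag. The sketch below is purely illustrative (TaggedRecord and use_record are hypothetical names) and is not a substitute for real privacy-enhancing technologies.

```python
# A purely illustrative sketch of purpose limitation: data carries the
# purpose it was collected for, and any reuse is checked against it.
# TaggedRecord and use_record are hypothetical names.

from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedRecord:
    value: str
    allowed_purpose: str  # the purpose the user consented to at collection

def use_record(record: TaggedRecord, purpose: str) -> str:
    """Refuse any use of the data outside its declared purpose."""
    if purpose != record.allowed_purpose:
        raise PermissionError(
            f"data collected for '{record.allowed_purpose}' "
            f"cannot be repurposed for '{purpose}'"
        )
    return record.value

address = TaggedRecord("123 Main St", allowed_purpose="order_fulfillment")
use_record(address, "order_fulfillment")   # permitted
# use_record(address, "marketing")         # would raise PermissionError
```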


Next steps

As AI agent use case development and implementation are still in nascent stages, it’s important to start thinking through how these considerations can be addressed. Doing so will help organizations harness the transformative potential of AI agents while mitigating risks and building trust with users and stakeholders. Importantly, consider the security measures that apply to the AI infrastructure overall, as well as to each of the applications and data sources the AI agent will need to access.

As a conceptual starting point, irrespective of what form of AI is being implemented, good security hygiene is paramount. Take a look at Google’s Secure AI Framework and its best practices for securing data, models, applications, and infrastructure.

Further, as you build and manage agentic AI systems, the interconnectivity among AI agents necessitates a standardized communication mechanism to enable interoperability. Google was proud to announce the Agent2Agent (A2A) Protocol, noting that “this first-of-its-kind open standard enables AI agents built by different vendors or on different frameworks to securely communicate, exchange information and coordinate actions across various enterprise platforms.”
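To give a flavor of what agent-to-agent interoperability involves, the sketch below shows a simplified, illustrative “agent card”: a machine-readable self-description an agent publishes so that peer agents can discover its skills and endpoint. The field names here are a rough approximation for illustration only; consult the A2A Protocol specification for the actual schema.

```python
# An illustrative, simplified "agent card" in the spirit of the A2A
# Protocol. These field names are a rough approximation for
# illustration, not the normative A2A schema.

import json

agent_card = {
    "name": "invoice-reconciliation-agent",        # hypothetical agent
    "description": "Matches supplier invoices to purchase orders.",
    "url": "https://agents.example.com/invoices",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "reconcile",
         "description": "Reconcile an invoice with its purchase order."},
    ],
}

# In A2A-style discovery, a card like this would typically be served
# from a well-known location so that peer agents can fetch and parse it.
print(json.dumps(agent_card, indent=2))
```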
