
BLOG

Strengthening AI agent security with identity management


January 20, 2025

The technological landscape is in a state of rapid flux, with AI steering the course of innovation. Among AI's most disruptive applications are AI agents—intelligent, autonomous systems engineered to execute tasks with minimal human involvement. These agents are driving transformative change across industries, and their strategic importance is especially evident in the realm of cybersecurity.

At Accenture, we believe that agentic architectures and AI agents will go beyond mere task automation—they will transform the way businesses operate. According to Accenture’s Tech Vision 2025 survey, 96% of executives anticipate a moderate to significant increase in the use of AI agents by their organizations in the next three years. Additionally, 77% of executives agree that AI agents will reinvent how their organization builds digital systems.

But the question remains: how can organizations ensure they are leveraging AI agents to their fullest potential while mitigating any associated risks?

With great power comes great responsibility

Organizations must prepare for the integration of AI agents into their workforce and marketplaces, treating them with the same level of security consideration as human employees. In our Tech Vision 2025 survey, 78% of executives agree that digital ecosystems will need to be built for AI agents as much as for humans over the next 3-5 years. As these solutions gain access to critical systems and data, regulatory requirements for their management are expected to become as rigorous as those for human counterparts.

Deploying agents at scale across an organization requires a holistic approach that aligns with and further advances a Zero Trust security model. Accenture’s AI Agent Zero Trust Model for Cyber Resilience presents a cutting-edge approach designed to enhance cybersecurity across various industries (Figure 1). This model operates on the core principle of "Trust Nothing, Verify Everything," which ensures that every component within an organization's network is continuously validated. The model is structured to secure AI systems comprehensively, incorporating four key areas:

  1. Secured Identity & Access Management
  2. Secured Workflow
  3. Secured AI Runtime
  4. Human in the Loop

By implementing this approach, companies from various industries can significantly bolster their defenses against cyber threats, ensuring robust security and resilience throughout their digital operations. This proactive and vigilant strategy is crucial in today's digital age, where cyber threats are increasingly sophisticated and pervasive.

This image shows the path for Accenture’s AI Agent Zero Trust Model for Cyber Resilience, and the key aspects powered by Accenture Security: Cyber Strategy, Cyber Resilience and Cyber Protection.

Figure 1. Accenture’s AI Agent Zero Trust Model for Cyber Resilience

In this blog, we will further detail Secured Identity & Access Management.

Digital identity will be pivotal in governing access and orchestration, determining what these agents can access and their permitted actions within an organization. Auditors may soon require proof of how organizations manage AI agent access, similar to current human access audits. Additionally, a marketplace for external use is likely to appear, where organizations can sell access to their AI agents to others. Cybersecurity will be crucial in this context, ensuring secure authentication, credentialing and authorization processes so that these agents can safely operate in external environments.

The need for modern identity management

Unlike traditional applications, which typically operate within predefined parameters and require explicit instructions for every action, AI agents are designed to function with a high degree of autonomy. This means they have the capability to make decisions and take actions independently, guided by their predefined goals and the knowledge they acquire through continuous learning. This level of autonomy introduces a unique set of challenges, particularly in the realm of identity management.

Traditional methods of managing access and permissions, which are often sufficient for conventional software, fall short when applied to AI agents. Instead, a modern approach to identity management is required, one that is deeply rooted in trust and mirrors the way we manage human access both within and outside of organizations. This approach must account for the dynamic nature of AI agents, ensuring that their actions are not only efficient but also secure and aligned with the overarching objectives of the organization.

The scale of identities

There's a lot of buzz in the industry about how soon we'll have millions or billions of AI agents on the internet, changing the way businesses work.

Within an organization, the landscape of identities is also expansive and diverse, encompassing everyone from contractors and employees to AI agents and third-party business partners. Effective Identity and Access Management (IAM) is therefore crucial, as it impacts every aspect of an organization's operations. To ensure robust security, cost-effectiveness and business acceleration, a comprehensive IAM strategy is essential.

This is particularly true for AI agents, which require modern IAM capabilities to secure and govern their access, thereby accelerating their ability to fulfill their intended roles. As organizations continue to expand and diversify, the mandate to secure all identities will lead to an exponential increase in the number of identities that need to be managed. This growth underscores the importance of having the right access controls in place at the right time, ensuring that all entities within the organization can operate efficiently and securely.

Challenges with AI agent identity

AI agents, by design, will interact with and modify sensitive data, necessitating privileges that enable them to fulfill their designated roles and tasks. This level of access, however, presents a series of significant challenges.

  • Standing privilege and privilege creep: Because AI agents will continue to find creative ways to complete their tasks, governing their access with static entitlements and roles invites privilege creep. Their permissions must not be standing grants.

  • Credential management: AI agents will hold many credentials to manage access. These credentials must be provisioned, rotated and de-provisioned frequently.

  • Regulatory requirements: While not currently regulated, AI agents will likely fall under future regulations, especially if they have access to financial and invoicing systems.

  • Agent communication: AI agents will communicate directly with humans and other agents, similar to how humans work together today.

To address these challenges, organizations must implement and manage dynamic and context-aware access controls for AI agents, ensuring security and compliance. Some best practices include:

  • Zero Trust security model: Implement a Zero Trust security model that assumes no implicit trust and continuously verifies every request as though it originates from an open network. This includes verifying the identity, device and context of the request.

  • Context-aware access: Use context-aware access controls that dynamically adjust permissions based on real-time factors such as user location, device status and behavior. This approach helps in narrowing access based on contextual parameters, reducing the attack surface.

  • Ephemeral access: Implement just-in-time access to ensure that AI agents only have the necessary permissions for the duration of their tasks. This minimizes the risk of privilege creep and unauthorized access.

  • Lifecycle management: Manage the lifecycle of AI agents, including creation, modification and de-provisioning. Regularly review and update access controls to ensure they remain relevant and secure.

  • Credential management: Regularly rotate credentials, keys and certificates to maintain security. Use automated tools to manage and rotate these credentials to reduce the risk of human error.
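The Zero Trust, context-aware and ephemeral access practices above can be sketched together in a few lines. The following is a minimal, illustrative Python sketch of a just-in-time token broker: the function name `issue_agent_token`, the context fields and the scope names are hypothetical, not a specific product API, and a real system would verify identity and attestation against an identity provider rather than trusting flags in a dictionary.

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentToken:
    """Short-lived, narrowly scoped credential issued for one task."""
    agent_id: str
    scopes: frozenset
    expires_at: float
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def is_valid(self, now=None):
        return (now or time.time()) < self.expires_at


def issue_agent_token(agent_id, requested_scopes, context,
                      allowed_scopes, ttl_seconds=300):
    """Zero Trust check on every request: verify identity and runtime context,
    then grant only the intersection of requested and policy-allowed scopes,
    valid just long enough for the task (just-in-time access)."""
    if not context.get("identity_verified"):
        raise PermissionError("agent identity not verified")
    if not context.get("workload_attested"):  # e.g. signed runtime attestation
        raise PermissionError("runtime attestation failed")
    granted = frozenset(requested_scopes) & frozenset(allowed_scopes)
    if not granted:
        raise PermissionError("no permitted scopes for this request")
    return AgentToken(agent_id, granted, time.time() + ttl_seconds)


# Usage: an invoicing agent asks for broad access but receives only
# what policy allows, and only for five minutes.
token = issue_agent_token(
    agent_id="invoice-agent-7",
    requested_scopes={"invoices:read", "invoices:write", "payments:execute"},
    context={"identity_verified": True, "workload_attested": True},
    allowed_scopes={"invoices:read", "invoices:write"},
)
print(sorted(token.scopes))  # payments:execute was never granted
```

Because every token carries its own expiry and scope set, there is no standing privilege to accumulate: when the task ends, the credential simply stops working.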

Ensuring dynamic and secure access controls, efficient credential management practices, and preparing for future regulatory landscapes will be essential to harness the benefits of AI agents while mitigating associated risks.
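Credential lifecycle management can be automated along the same lines. Below is a minimal sketch of scheduled rotation and de-provisioning; the `CredentialStore` class is hypothetical, and a production deployment would back it with a secrets manager rather than process memory.

```python
import secrets
import time


class CredentialStore:
    """Illustrative in-memory store that rotates agent credentials on a
    schedule and removes them when an agent is retired."""

    def __init__(self, max_age_seconds):
        self.max_age = max_age_seconds
        self._creds = {}  # agent_id -> (secret, issued_at)

    def provision(self, agent_id):
        """Create or replace the agent's credential."""
        secret = secrets.token_urlsafe(32)
        self._creds[agent_id] = (secret, time.time())
        return secret

    def rotate_expired(self, now=None):
        """Rotate every credential older than max_age; return rotated ids.
        Automating this step removes the human error the practice warns about."""
        now = now or time.time()
        rotated = []
        for agent_id, (_, issued_at) in list(self._creds.items()):
            if now - issued_at >= self.max_age:
                self.provision(agent_id)
                rotated.append(agent_id)
        return rotated

    def deprovision(self, agent_id):
        """Remove credentials when the agent leaves the lifecycle."""
        self._creds.pop(agent_id, None)


store = CredentialStore(max_age_seconds=3600)
old = store.provision("invoice-agent-7")
rotated = store.rotate_expired(now=time.time() + 7200)  # simulate time passing
new = store._creds["invoice-agent-7"][0]
```

Running such a rotation job on a timer, and calling `deprovision` from the same workflow that retires an agent, keeps credential hygiene tied to the agent's lifecycle rather than to ad hoc manual reviews.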

To benefit from the advances of AI agents, organizations must move from traditional instruction-driven, predefined technology stacks to intention-based systems, powered by AI and generative AI, with a cognitive architecture that mimics human-like thinking and learning. This is the driving force behind the three new engineering principles we defined in Chapter 2 of our Reinventing with a Digital Core report. These principles are essential for an era characterized by deep generative AI integration, enabling machine operations and customization to meet specific industry needs.

Potential future regulations for AI agents

Future regulations for AI agents are anticipated to tackle a wide array of ethical, legal, safety and technical challenges to ensure the responsible development and deployment of AI systems. Several key areas are expected to be at the forefront of these regulatory efforts. Transparency and accountability will likely be paramount, with future regulations requiring AI systems to offer clear explanations for their decisions. This transparency is crucial for maintaining public trust and preventing the unethical use of AI. Additionally, robust measures will be necessary to safeguard data privacy and security, protecting against breaches and ensuring that AI systems do not infringe upon user privacy. Sector-specific regulations are also anticipated, with different industries potentially facing tailored rules based on the risks posed by AI applications. For instance, the United States is expected to adopt a framework akin to the EU’s AI Act, which categorizes AI systems according to their risk levels.

Companies will need to establish comprehensive AI governance frameworks that encompass fairness, accountability, risk management, security, and data integrity. Furthermore, regulations such as the Algorithmic Accountability Act may mandate companies to conduct impact assessments on their AI systems to identify and mitigate biases, ensuring that AI technologies are developed and used in a manner that benefits society as a whole.

Navigating AI agent security

Securing AI agents is a complex but essential task. By leveraging modern IAM capabilities, organizations can ensure that their AI agents operate securely and effectively, driving business value while mitigating risks. As AI continues to evolve, so too must the approaches to securing these powerful tools. This will propel organizations forward, safely and securely.

WRITTEN BY

Damon McDougald

Global Cyber Protection Domain Lead

Daniel Kendzior

Managing Director – Global Data & AI Security Lead