
Your AI Agents Are Running Without Guardrails. Auth0 Has an Answer

Akshay Sura - Partner

11 Mar 2026


Over the last year, AI agents have moved from experiments to real work. They write code, query internal systems, pull customer data, and trigger workflows across the APIs that run the business.

That shift changes the conversation. The question is no longer whether AI is useful. The real question is whether AI agents can be trusted inside production environments.

This is quickly becoming one of the biggest challenges in enterprise AI agent security.

Agents are essentially a new kind of application client. Instead of a browser or mobile app calling APIs, the client is an autonomous system acting on a user's behalf. That means the same identity and access controls we apply to applications now need to apply to agents as well.

We recently spent time with the team at Auth0 walking through their Auth0 for AI Agents platform. What started as a product walkthrough quickly turned into a broader conversation about something we are seeing firsthand with our enterprise clients: AI agents are being deployed in production systems without basic governance. In many cases, there are no guardrails at all.

The Real Problem Is Not AI

Most conversations about AI security focus on prompt injection or hallucinations. Those risks are real. But in most enterprises, the more immediate problem is much simpler. Agents are connecting to systems they should not be touching, using credentials that were never designed for LLM access, and operating without identity, authorization, or traceability.

We have a client right now where three separate teams are building AI agents independently. One team is wiring agents directly into their ERP system. Another built a tool on Replit that pulls live product data from Elasticsearch and a backend database. The third team's position was simple: "We're just building brochureware sites, so what's the harm?"

Meanwhile, none of these teams had basic safeguards in place. No authentication tied to real user identities. No scoped permissions for what the agents could access. No approval process for sensitive operations. No audit trail showing what the agent actually did.

What makes AI agents different from traditional integrations is how quickly they bypass existing architecture review processes. A developer can wire an LLM to an ERP API in an afternoon, skipping the governance steps that would normally accompany a new system integration. When we raised the issue with one group, the response was blunt: "They're letting us do it, so I'll do whatever I want."

Multiply that mindset across departments, and the outcome becomes predictable. Eventually, something breaks. And when it does, nobody can explain how or why.

The real problem is not the AI model. It is unmanaged access.

Why Identity Matters for AI Agents

A useful way to think about AI agents is as a new category of digital worker. Like employees, they need identity, permissions, oversight, and an audit trail. Without those controls, organizations are effectively giving an invisible worker access to critical systems.

Traditional software solved this problem years ago. Web and mobile applications authenticate users, obtain tokens, and call APIs with clear permissions. Every request is traceable back to a user. AI agents should follow the same pattern.

This is the core idea behind Auth0 for AI Agents. Instead of inventing a new security model for AI, the platform extends existing identity and authorization infrastructure to agent-driven workflows. Four capabilities in particular stand out in real enterprise scenarios.

Identity for AI Agents

The first requirement is basic but often overlooked. Agents need to know who the user is. When someone interacts with an AI assistant, that interaction should carry the same identity context as any other application request.

Auth0 handles this through its Universal Login system. Users authenticate the same way they already do for web applications, whether that is email and password, enterprise SSO through something like Microsoft Entra ID, or a passwordless option. Once authenticated, the agent receives an identity token that represents the user. Every API call the agent makes can include that token, allowing backend systems to enforce the same permissions they already use.

For organizations that already rely on Auth0 for customer or partner authentication, this is especially useful. One of our clients manages thousands of dealers across several brands through a shared portal platform. Dealers already authenticate through Auth0 to access order history, product data, and account information. Extending that same identity layer to an AI assistant means the dealer interacting with the agent inherits the same permissions they have in the portal interface. No new accounts, no shadow identity system, no hidden access paths.
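The pattern is straightforward to sketch. Below is a minimal, illustrative Python example of the idea: the agent holds no credentials of its own, only the user's token, and forwards it on every backend call so the backend enforces its own permissions. The function and field names (`backend_get_orders`, `dealer_id`, `read:orders`) are assumptions for illustration, not Auth0 SDK identifiers, and the token here is simulated rather than verified against a real issuer.

```python
# Sketch: the agent carries the user's identity token and forwards it on
# every API call; the backend, not the agent, enforces permissions.
# All names here are illustrative, not Auth0 SDK calls.

def backend_get_orders(token: dict, dealer_id: str) -> list:
    """Backend endpoint: trusts only the (already verified) token."""
    if "read:orders" not in token["scope"].split():
        raise PermissionError("token lacks read:orders scope")
    if token["dealer_id"] != dealer_id:
        raise PermissionError("token is for a different dealer")
    return [f"order-1001 ({dealer_id})", f"order-1002 ({dealer_id})"]

class DealerAgent:
    """The agent has no credentials of its own, only the user's token."""
    def __init__(self, user_token: dict):
        self.token = user_token

    def list_orders(self) -> list:
        # Every downstream call carries the user's identity context.
        return backend_get_orders(self.token, self.token["dealer_id"])

# A token as it might look after Universal Login (simulated, pre-verified).
token = {"sub": "auth0|dealer-42", "dealer_id": "dealer-42",
         "scope": "openid read:orders"}
print(DealerAgent(token).list_orders())
```

The key design point is that the backend never trusts the agent itself; it trusts the token, exactly as it would for a browser or mobile client.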

Token Vault and Delegated API Access

The next challenge agents introduce is credential management. Agents often need access to external services such as calendars, messaging platforms, CRM systems, or internal APIs. The shortcut many teams take is embedding API keys directly into agent configurations. That approach causes problems very quickly. Keys become over-permissioned, they rarely get rotated, and they end up stored in places they should never live.

Auth0 addresses this with Token Vault. Instead of exposing credentials to the agent directly, the vault manages OAuth tokens for third-party integrations. A user authorizes access once, and Auth0 securely stores the credentials. When the agent needs to call an API, it requests a scoped token from the vault. The agent never handles raw credentials.

This removes a whole category of security risks. No API keys embedded in prompts or configuration files. No long-lived tokens floating around development environments. No blanket access to every endpoint an API exposes.
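To make the token-vault pattern concrete, here is a minimal sketch of the flow it replaces: the long-lived credential lives only inside the vault, stored once when the user authorizes the integration, and the agent only ever receives short-lived, scoped tokens. This is an illustration of the pattern under our own assumptions, not Auth0's Token Vault API.

```python
import secrets
import time

class TokenVault:
    """Sketch of the token-vault pattern. The vault holds the long-lived
    credential; agents only ever receive short-lived, scoped tokens.
    Illustrative only, not Auth0's Token Vault interface."""

    def __init__(self):
        self._credentials = {}  # (user, service) -> long-lived credential
        self._issued = {}       # token -> (user, service, scope, expiry)

    def connect(self, user: str, service: str, credential: str) -> None:
        # Done once, when the user authorizes the integration.
        self._credentials[(user, service)] = credential

    def get_token(self, user: str, service: str, scope: str,
                  ttl: int = 300) -> str:
        if (user, service) not in self._credentials:
            raise KeyError("user has not authorized this service")
        token = secrets.token_urlsafe(16)
        self._issued[token] = (user, service, scope, time.time() + ttl)
        return token  # the agent gets this, never the raw credential

vault = TokenVault()
vault.connect("dealer-42", "crm", credential="refresh-token-abc")
scoped = vault.get_token("dealer-42", "crm", scope="read:contacts")
```

Because every issued token is scoped and expiring, a leaked agent configuration exposes at most a few minutes of narrow access rather than a master key.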

Human Approval for Sensitive Actions

AI agents are useful because they automate work. But full autonomy for every action is irresponsible.

Scheduling a meeting is harmless. Changing financial data is not.

Auth0 supports human approval workflows through asynchronous authorization. When an agent attempts a sensitive operation, it can pause and request approval from the user. The user receives a notification describing exactly what the agent wants to do, whether that is approving a refund, modifying pricing, or triggering an order workflow. The user approves or rejects the action, and the agent proceeds accordingly.

This approach allows organizations to benefit from automation while still keeping humans responsible for high-impact decisions.
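The pause-and-resume shape of asynchronous authorization can be sketched in a few lines. The action names, the in-memory pending queue, and the exception-based pause below are all illustrative assumptions; in a real deployment the approval prompt reaches the user out of band, through a notification.

```python
# Sketch of the asynchronous-approval pattern: sensitive actions pause
# the agent until a human decides. Illustrative, not Auth0's API.
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

class ApprovalRequired(Exception):
    """Raised when the agent must pause and wait for a human decision."""
    def __init__(self, request_id: str):
        super().__init__(request_id)
        self.request_id = request_id

SENSITIVE_ACTIONS = {"issue_refund", "change_pricing", "trigger_order"}

class ApprovalGate:
    def __init__(self):
        self.pending = {}  # request_id -> (action, params)

    def run(self, action: str, **params) -> str:
        if action in SENSITIVE_ACTIONS:
            request_id = f"req-{len(self.pending) + 1}"
            self.pending[request_id] = (action, params)
            # In production: notify the user with a description of the action.
            raise ApprovalRequired(request_id)
        return f"done: {action}"  # low-risk actions run immediately

    def resume(self, request_id: str, decision: Decision) -> str:
        action, params = self.pending.pop(request_id)
        if decision is Decision.APPROVED:
            return f"done: {action} {params}"
        return f"cancelled: {action}"

gate = ApprovalGate()
gate.run("schedule_meeting")  # harmless, proceeds without a pause
try:
    gate.run("issue_refund", amount=250)
except ApprovalRequired as pause:
    gate.resume(pause.request_id, Decision.APPROVED)
```

The important property is that the agent cannot complete the sensitive action on its own path; the only way forward runs through an explicit human decision.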

Fine Grained Authorization for AI Retrieval

Many enterprise AI systems rely on retrieval-augmented generation. The agent queries a search index or vector database, retrieves relevant documents, and then uses those documents to generate a response. Without authorization controls, this retrieval layer becomes a major risk. An agent might surface documents that the user should never see.

Auth0 addresses this through Fine Grained Authorization. Instead of relying solely on roles, the system models relationships between users, resources, and actions. When an agent performs a search, the results can be filtered based on what the user is actually allowed to access. That means a dealer asking about their order history sees only their orders. A support agent sees documentation relevant to their product line. Internal financial information remains restricted to authorized staff.

For organizations dealing with regulated or multi-tenant data, this type of control is essential.
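The relationship-based model behind this kind of filtering can be sketched simply. The example below uses (user, relation, object) tuples in the style popularized by relationship-based authorization systems and filters retrieval results against them; the tuples, document index, and `search` function are illustrative assumptions, not Auth0's Fine Grained Authorization API.

```python
# Sketch of relationship-based filtering at the RAG retrieval layer.
# (user, relation, object) tuples and all names are illustrative.

TUPLES = {
    ("dealer-42", "owner", "order:1001"),
    ("dealer-42", "owner", "order:1002"),
    ("dealer-7",  "owner", "order:2001"),
    ("staff-1",   "viewer", "doc:pricing"),
}

def allowed(user: str, relation: str, obj: str) -> bool:
    """Check a single relationship tuple."""
    return (user, relation, obj) in TUPLES

def search(user: str, query: str, index: list) -> list:
    # First retrieve by relevance, then keep only documents the
    # requesting user is actually allowed to read.
    hits = [doc for doc in index if query in doc["text"]]
    return [d for d in hits if allowed(user, d["relation"], d["id"])]

INDEX = [
    {"id": "order:1001", "relation": "owner", "text": "order history widget A"},
    {"id": "order:2001", "relation": "owner", "text": "order history widget B"},
    {"id": "doc:pricing", "relation": "viewer", "text": "internal pricing"},
]

# dealer-42 asking about orders sees only their own order, not dealer-7's.
print([d["id"] for d in search("dealer-42", "order", INDEX)])  # ['order:1001']
```

Filtering at the retrieval layer, rather than hoping the model withholds restricted text after the fact, is what keeps unauthorized documents out of the prompt entirely.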

The Reality of Enterprise AI Adoption

The uncomfortable truth is that AI agents are already inside most enterprise architectures. Developers are wiring agents into ERP systems, search indexes, APIs, and internal tools faster than governance frameworks can adapt.

Trying to ban AI tools rarely works. We have watched developers at heavily regulated organizations switch to personal machines, generate code or workflows externally, and then bring the results back inside corporate environments. You can't stop people from using AI tools. But you can provide a better path.

Identity infrastructure provides one of the most practical ways to introduce guardrails without slowing teams down.

Questions Every Architecture Team Should Ask

If your organization is building AI agents, or if your teams are building them whether you know it or not, there are a few questions worth asking now.

Do your agents know who is using them? Without identity, an agent operates in a vacuum.

How are your agents accessing external systems? If the answer involves static API keys or shared credentials, there is a security gap.

Which operations require human approval? Not every task needs oversight, but high-impact actions absolutely do.

Are your agents respecting data access policies? If an agent can retrieve any document regardless of user permissions, you have a data leakage problem.

Auth0 for AI Agents addresses these questions by applying familiar identity and authorization patterns to agent-driven systems. The platform supports major frameworks, including LangChain, LlamaIndex, Vercel AI SDK, and Cloudflare Agents, and their Auth for MCP capability is in early access for teams working with the Model Context Protocol.

The agents are already out there. The real question is whether they are operating with the same security standards you expect from any other application touching your data.


Akshay Sura


Akshay is a ten-time Sitecore MVP and a two-time Kontent.ai MVP. In addition to his work as a solution architect, Akshay is also one of the founders of SUGCON North America 2015, SUGCON India 2018 & 2019, Unofficial Sitecore Training, and Sitecore Slack.

Akshay founded and continues to run the Sitecore Hackathon. As one of the founding partners of Konabos Consulting, Akshay will continue to work with clients, leading projects and mentoring their existing teams.

