


Q&A: SailPoint EVP Chandra Gnanasambadam Discusses the Next Evolution of Identity, Governance, and AI Security

Teri Robinson

May 12, 2026

Enterprises are rapidly moving from a human-centric model of security to one that must account for both humans and autonomous AI agents. As organizations deploy these agents across cloud environments, applications, and endpoints, they face a growing governance gap. Unlike traditional users, AI agents can act independently and at machine speed, often without clear ownership or consistent controls. As these non-human identities multiply, quickly outnumbering human users, organizations are being forced to rethink identity, access, and governance at a fundamental level. TechChannels discussed this shift at RSA with Chandra Gnanasambadam, EVP of Product and Chief Technology Officer at SailPoint.

To address this shift, SailPoint this week announced Agentic Fabric, a new solution designed to secure AI agents and other non-human identities at scale. Extending its Identity Security Cloud, Agentic Fabric provides a unified, identity-centric approach that connects discovery, visibility, governance, authorization, and protection in a single platform. By mapping every AI agent to its human owner, associated data, and systems, the platform gives organizations the context needed to enforce accountability, manage risk, and maintain control as AI adoption accelerates.

This evolution reflects a broader move toward adaptive identity—a real-time, zero-trust approach that replaces static governance models with dynamic, context-aware decisioning. “We are moving to a human plus AI world and the scale is dramatic,” Gnanasambadam said at the solution’s launch at Nasdaq Marketplace. Securing the age of autonomous agents requires not only new technologies, but an entirely new operating model—one that integrates identity, security, and AI governance into a unified framework capable of operating at scale in an increasingly autonomous world.

Q. Why is now such a critical moment for the security industry?

A. There’s no better time to be in the security industry. We feel like there are two fundamental tectonic shifts happening. One is that we are moving from a world of securing humans to securing humans plus agents. It’s important to think of this as a ratio: we believe today we are operating at a ratio of one human to 1,000 agents. And when I say agents, I mean everything: the agents, their missions, the tools they access, and the credentials they use, all of that combined. And that ratio is growing. The second tectonic shift is that we’re moving from the world of static governance to real time.

Q. How is access governance evolving in this new environment?

A. The simplest example is access. Today, most access inside a company is what we would call static access, or static privilege: it was granted to you, and it stays with you while you rotate your passwords, until your role changes or you leave the company. In a real-time world, our view is that even while you are employed by the company, permissions should be based on what you are accessing and the context you are in. For example, if you are accessing sensitive content on public Wi-Fi here at this hotel we’re sitting in, should you have the same permissions? Maybe email access is fine as read-only, but critical systems should require a secure environment. That’s what we mean by moving from static to real-time access.
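The static-to-real-time shift Gnanasambadam describes can be sketched as a context-aware decision function. This is a minimal illustration, not SailPoint's implementation; the context fields, labels, and thresholds here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    resource_sensitivity: str  # "low" or "high" (illustrative labels)
    network: str               # "corporate" or "public_wifi"

def decide(ctx: AccessContext) -> str:
    """Decide per-request instead of relying on a static grant.

    High-sensitivity resources require a trusted network; on an
    untrusted network, access is downgraded to read-only or denied.
    """
    if ctx.network == "corporate":
        return "allow"
    if ctx.resource_sensitivity == "low":
        return "read_only"  # e.g. email on hotel Wi-Fi
    return "deny"           # critical systems need a secure environment
```

The key difference from static governance is that the same user gets a different answer depending on the live context of the request, so revoking or downgrading access requires no provisioning change.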

Q. What makes agentic AI fundamentally different from previous technology shifts like cloud?

A. With cloud, you still had the same humans doing the same jobs; you could govern them the same way, just on a different architecture. That’s not the case today. We now have a whole new form of digital humans operating at scale. You can download OpenClaw now and add a skill to it; there are more than 1,000 skills on the marketplace. You deploy the agent and it will try to do the job even if it’s denied access. It might try to break in. In one example, an agent was denied access to an application. The manager agent told it to ‘find a way.’ It searched for vulnerabilities, found one, and used it to break in. No human intervention. The job got done, but it violated policy. That’s the risk.

Q. Where do organizations need to start when managing this new complexity?

A. Step one is inventory. You cannot manage what you don’t know exists. Every employee can now create or deploy agents, so organizations need full visibility. Step two is shadow AI remediation, which is, say, when someone at your company tries to upload a sensitive document with customer confidential information to ChatGPT and your company policy prevents it. How do you, in real time, catch it and block it? That is really what we mean by shadow AI remediation. Step three is an agent audit. Regulators will ask: Do you know what your agents are doing? Can you prove it? And finally, you need authorization. You’ve got to have an authorization platform that makes sure the agent is bound by the permissions you have set.
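The shadow AI remediation step above, catching a sensitive upload in real time, can be sketched roughly as follows. This is an illustrative toy, assuming regex markers for sensitive content; a real deployment would use DLP classifiers and browser or proxy integration rather than pattern matching:

```python
import re

# Hypothetical sensitive-data markers for illustration only.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like number
    re.compile(r"customer\s+confidential", re.I),  # classification label
]

def check_upload(destination: str, text: str, allowed: set[str]) -> str:
    """Gate an outbound upload to an AI tool in real time."""
    if destination in allowed:
        return "allow"   # sanctioned tool, no intervention
    if any(p.search(text) for p in SENSITIVE):
        return "block"   # shadow AI + sensitive data: stop it
    return "warn"        # unsanctioned tool, non-sensitive content

print(check_upload("chatgpt.com", "Customer Confidential: Q3 plan",
                   {"internal-llm"}))  # prints: block
```

The three verdicts mirror the interview's framing: sanctioned use passes, sensitive data to unsanctioned tools is blocked outright, and everything else gets a warning.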

Q. How should organizations think about governing agents differently than humans?

A. Agents are fundamentally different. You can’t take them to jail, so agents are the legal responsibility of the owner. They have accountability, but not authority. That’s why every agent must have an immutable owner. You also need continuous risk evaluation, real-time audit logs, and strong authorization controls. And importantly, we believe agents need a moral code. Humans will refuse unethical instructions. Agents won’t, unless you explicitly design for that. That’s a new requirement for governance.
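Two of the requirements named here, an immutable owner and real-time audit logs, can be sketched with a frozen record and an append-only log. The class and field names are hypothetical, chosen only to illustrate the governance idea:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Agent:
    agent_id: str
    owner: str  # immutable human owner; frozen=True blocks reassignment

@dataclass
class AgentRegistry:
    agents: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents[agent.agent_id] = agent
        self.record(agent.agent_id, "registered")

    def record(self, agent_id: str, action: str) -> None:
        """Append an owner-attributed entry; the log is never rewritten."""
        owner = self.agents[agent_id].owner
        self.audit_log.append(
            (datetime.now(timezone.utc), agent_id, owner, action)
        )
```

Because every log entry carries the owner resolved at write time, each agent action remains attributable to an accountable human even after the agent is retired.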

Q. How are organizations addressing risk at the point of interaction?

A. Most AI activity happens through the browser. Employees are uploading sensitive data into tools like, say, Grammarly, which runs its own agents. And if they’re using OpenAI, Anthropic, or one of those models, that data goes into them. Imagine you’re a Walmart with 1.5 million employees, including contractors, and every one of them is doing 10 of these actions a day, all through the browser. Our approach is to set simple policies that define the rules and then stop these actions. We call it the kill switch: stop it right away. Other actions may just trigger a warning. These policies are written in plain English, not some programming language.
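The kill-switch behavior described above, some actions stopped outright and others allowed with a warning, amounts to a small verdict table. A minimal sketch, with illustrative rule keys standing in for the plain-English policies an administrator would actually write:

```python
from enum import Enum

class Verdict(Enum):
    KILL = "kill"    # stop the action immediately
    WARN = "warn"    # allow, but warn the employee
    ALLOW = "allow"  # no intervention

# Hypothetical rules keyed by (tool category, data label); in the product
# described, admins would express these in plain English instead.
RULES = {
    ("external_ai", "confidential"): Verdict.KILL,
    ("external_ai", "internal"):     Verdict.WARN,
}

def evaluate(tool_category: str, data_label: str) -> Verdict:
    """Look up the enforcement verdict for a browser action."""
    return RULES.get((tool_category, data_label), Verdict.ALLOW)
```

At the scale the interview cites (millions of employees, each performing many such actions daily), the important property is that the lookup is cheap enough to run inline on every browser event.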

Q. Is this similar to previous waves like mobile or BYOD?

A. It’s a great analogy. Just as mobile devices created sprawl, we’re now seeing it with agentic AI. The difference is the scale and autonomy. Every employee can deploy multiple agents, and each agent can act independently. Have companies really discovered all of this? It’s exponentially more complex than managing devices.

Q. What makes this shift more significant than past transformations?

A. There has never been a time like this. Even cloud wasn’t this transformative. We are moving into a world where non-human identities outnumber humans by orders of magnitude. At the same time, access decisions must be made in real time. This is a fundamentally new paradigm, and it requires a fundamentally new approach to security.

Q. How mature is the industry in addressing these challenges?

A. This is moving at such a fast pace. We are still learning, and we’ll continue to learn for another two years before the best practices and the blueprint become clear. Our customers include some of the largest organizations, and they are already moving from governing humans to governing AI with us. So we are learning from the most complex institutions in the world.

There’s a lot of excitement around agentic AI—and rightly so. But this is also a moment of responsibility. The technology is incredibly powerful, but how it is used, governed, and secured will determine its impact. Organizations need to move quickly, but thoughtfully.






