TechChannels Expert Insights

Q&A: Arkose Labs CEO Lays Out Steps for Managing AI Agents

Written by Teri Robinson | Feb 24, 2026 1:00:00 PM

Organizations already struggle to differentiate between legitimate and malicious AI agents, and the situation is only going to get worse as non-human identities (NHIs) proliferate. Yet, despite the growing danger, only a small fraction of executives actually have a strategy for managing NHIs.

Arkose Labs CEO Kevin Gosschalk explains why traditional security measures don’t work against malicious AI agents and the steps organizations should take immediately to protect against those agents while still preserving the benefits for legitimate AI agents.

Q. Why are legitimate AI agents so hard to tell apart from malicious ones?

A. The fundamental challenge is that both legitimate and malicious AI agents exhibit remarkably similar behaviors—they automate tasks, make rapid decisions, and interact with systems at scale. Traditional security measures look for anomalies in patterns, but AI agents are the new pattern. A legitimate customer service bot and a credential-stuffing bot both make numerous API calls, navigate workflows systematically, and adapt to responses. The difference lies in intent, not capability.

What makes this particularly difficult is that malicious agents are specifically designed to mimic legitimate behavior. They can solve CAPTCHAs, exhibit human-like timing variations, rotate through residential proxies, and even engage in seemingly normal browsing patterns before attacking. They're no longer the crude bots of yesterday. They're sophisticated actors that blend into legitimate traffic by design.

Q. You’ve cited Okta's 2025 Identity Security report that said only 10% of executives have strategies for managing non-human identities. Why is that number so low? What are the challenges of creating such a strategy?

A. That 10% figure reveals a critical blind spot in enterprise security. Most organizations have spent decades perfecting human identity management, investing in SSO, MFA, and identity governance. But the explosion of AI agents, API keys, service accounts, and automated systems has caught them flat-footed.

The challenges are threefold. First, visibility: many organizations don't even know how many non-human identities they have operating across their infrastructure. These identities multiply rapidly as teams deploy AI tools, chatbots, and automation without centralized oversight. Second, there's no established framework. Unlike human identity management where standards like OAuth and SAML exist, non-human identity governance is still nascent. Third, there's an operational tension—AI agents need broad permissions to function effectively, but those same permissions create security risks. Organizations struggle to balance enablement with protection.

The timing couldn't be worse. As companies rush to adopt AI agents for competitive advantage, they're inadvertently creating massive attack surfaces.

Q. Could you discuss the challenges that expanding ecosystems create when it comes to managing agentic AI?

A. Modern enterprises operate in interconnected ecosystems where AI agents don't just operate internally—they interact with partner systems, third-party APIs, customer platforms, and supply chain networks. This creates exponential complexity.

Consider a retail company: its AI agents might interact with payment processors, inventory systems, shipping partners, customer service platforms, and marketplace APIs. Each connection point is both a business necessity and a potential vulnerability. When a malicious agent compromises one system, it can pivot across the ecosystem, exploiting trust relationships that were designed for legitimate automation.

The challenge intensifies because these ecosystems blur responsibility boundaries. Who's accountable when an AI agent from your partner's system exhibits suspicious behavior on your platform? How do you distinguish between an authorized agent acting on legitimate business and a compromised agent being used for reconnaissance? Traditional perimeter-based security crumbles when the perimeter includes dozens of interconnected systems, each with its own AI agents.

We're also seeing attackers exploit this complexity through "agent chaining," where they compromise multiple low-privilege agents across an ecosystem to achieve objectives that no single agent could accomplish.

Q. Why don't traditional defenses work against malicious AI agents?

A. Traditional defenses were built for a different threat landscape. Rate limiting assumes attackers will be aggressive and noisy. CAPTCHAs assume bots can't solve complex puzzles. Behavioral analysis assumes malicious actors will deviate from normal patterns. AI agents break all these assumptions.

Modern AI agents can throttle their activity to stay below rate limits, solve visual and logical puzzles, and perfectly mimic legitimate user behavior because they're trained on the same patterns. They can even generate contextually appropriate responses in chat interactions, making them indistinguishable from real users in many scenarios.

More fundamentally, traditional defenses focus on what is happening rather than why. They might flag a high volume of login attempts but struggle when an AI agent makes one carefully crafted attempt every few hours. They can detect known attack signatures but fail against novel techniques that AI agents can generate and iterate on autonomously.
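The "low and slow" pattern described here can be illustrated with a minimal sketch of a classic sliding-window rate limiter. The window size, threshold, and identity names below are illustrative assumptions, not any vendor's implementation:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300      # 5-minute window
MAX_ATTEMPTS = 10         # flag more than 10 attempts per window

attempts = defaultdict(deque)  # identity -> timestamps of recent attempts

def is_suspicious(identity: str, now: float) -> bool:
    q = attempts[identity]
    q.append(now)
    # Drop attempts that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_ATTEMPTS

# A noisy bot: 50 attempts in under a minute -> quickly flagged.
noisy = [is_suspicious("bot-a", float(t)) for t in range(50)]

# A patient AI agent: one careful attempt every two hours -> never flagged.
patient = [is_suspicious("bot-b", t * 7200.0) for t in range(50)]

print(any(noisy), any(patient))  # → True False
```

The burst trips the threshold; the patient agent's window never holds more than one attempt, so volume-based detection alone sees nothing.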

The defense-offense asymmetry has never been starker. Attackers can deploy thousands of AI agents, each learning from failures and adapting strategies in real-time, while defenders are often still using rule-based systems that require manual updates.

Q. What steps should organizations take now to protect themselves against malicious AI agents but preserve the benefits for legitimate AI agents?

A. Organizations need a three-layered approach: visibility, verification, and adaptive enforcement. First, establish comprehensive visibility into all non-human identities. Create an inventory of every AI agent, API key, and service account operating in your environment. This foundational step is often overlooked but absolutely critical—you can't protect what you can't see.

Second, implement continuous verification rather than one-time authentication. Legitimate AI agents should be able to prove their identity and intent throughout their entire session, not just at login. This means analyzing behavioral patterns, validating business context, and checking for anomalies in real-time. Technologies like device intelligence and behavioral biometrics (adapted for AI agents rather than humans) can help distinguish legitimate automation from malicious activity.

Third, deploy adaptive enforcement that adjusts based on risk signals. Instead of binary allow/block decisions, implement graduated responses. A suspicious AI agent might receive additional challenges, rate limits, or restricted access while maintaining service for legitimate users. This preserves business operations while containing threats.

Organizations should also adopt a "zero trust" mindset specifically for AI agents. Just because an agent authenticated successfully doesn't mean it should be trusted indefinitely. Continuous validation, least-privilege access, and network segmentation are essential.

Finally, this isn't just a technology problem. It requires cross-functional collaboration between security, IT, development, and business units. Every team deploying AI agents needs to understand the security implications and follow established governance frameworks.

The organizations that act now will have a significant advantage. Those that wait until after a major breach will find themselves playing expensive catch-up.