TechTalks

On-Demand Webinar | 25 Minutes

Friend or Foe? Building Trust Models for the Agentic AI Ecosystem

The same AI powering your customer service chatbot is now being used to attack your login flows. Agentic capabilities that help real users are also enabling fraudsters to automate account takeover, credential stuffing, and synthetic identity attacks at scale.

In this focused TechTalks session, Arkose Labs experts Kevin Gosschalk and Shimon Modi unpack how to separate the AI agents you should trust from those you must stop. They reveal the behavioral signatures that distinguish good agents from bad, explain why traditional controls fall short, and show how adaptive trust models make real-time decisions about AI intent. You'll walk away with practical frameworks you can apply immediately to protect your business without disrupting legitimate users.

Key takeaways include:

  • How agentic AI mimics human behavior to bypass traditional fraud controls.

  • Which signals (device, biometrics, intent) help classify AI agents into good, gray, and malicious cohorts.

  • Why 74% of agentic AI traffic shows fraud indicators, and what that means for 2026.

  • A simple three-step framework for governing AI agent access and reducing fraud risk.

Watch now to learn how adaptive trust models can help your organization stay ahead of rapidly evolving agentic AI threats without slowing innovation.

Get Access Now


Solange Deschâtres

Moderator

TechTalks, an Energize Marketing Company


Kevin Gosschalk

Speaker

Founder and CEO

Arkose Labs


Shimon Modi

Speaker

Senior Vice President of Product

Arkose Labs