Agentic AI is being felt in every facet of security, proving transformational for attackers and defenders alike. That creates both challenges and opportunities for security awareness training.
Tech-Channels™ sat down at RSAC 2026 with Roger Grimes, CISO Advisor at KnowBe4, who has journeyed from AI skeptic to advocate. In his 40 years in the computer security industry, Grimes has honed expertise in a wide variety of areas from PKI to identity management and is the author of 15 books. He marvels at the transformation AI is bringing to the threat landscape and security awareness training—and explains why it can be a tougher, more effective instructor than a human.
Q. As Agentic AI colors just about every aspect of business and turns security on its head, what changes is it bringing to training?
A. There is a good chance browsing is going to diminish pretty quickly over time. Already, when people search for something, they only read what's at the top of the browser or the search results; they're not clicking on the link. That's changing the way we write and communicate with people. But we also realized everyone was going to have an AI agent, or agents, on their desktop, and we need to reduce the risk of that agent. One of the things our recent report talks about is that there's been a significant increase in phishing calendar requests, where suddenly you've got this weird request on your calendar and you don't know why. You've got to protect the agent that's looking at that. If you don't train the agent or reduce the risk, how does it know the difference between a malicious calendar request and a real one? So, we no longer just talk about humans. It's the workforce, it's trust management, it's humans plus the agents they use.
Q. What are the outcomes so far?
A. The data so far is pretty good. It shows that if you let AI agents train people, selecting the simulated phishing templates, and give remedial training to anyone who fails one or has a negative interaction with it, it actually reduces the company's risk three times more than if you had a human admin doing it.
Q. Why is that?
A. As far as we can tell, right now, the agent is picking tougher templates. It seems like the humans will pick an easier template, and if someone passes it, they don't increase the difficulty. There's this NIST difficulty rating, the NIST Phish Scale, which runs from easy to very difficult. At least the preliminary data shows that once the human admins pick, say, a moderate difficulty, they never increase it. The agent, though, says if a person passes a moderate one, let's give them a more difficult one.
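To make the contrast concrete, here is a minimal illustrative sketch in Python of the escalation behavior Grimes describes. It is not KnowBe4's implementation; the difficulty tiers, function name, and pass/fail handling are assumptions made purely for illustration, loosely modeled on the NIST Phish Scale's idea of detection difficulty.

```python
# Illustrative sketch only: models the adaptive-difficulty behavior described
# above, not any vendor's actual product logic. Tiers and names are hypothetical.

DIFFICULTY_LEVELS = ["least", "moderately", "very"]  # detection difficulty tiers

def next_difficulty(current: str, passed_last_simulation: bool) -> str:
    """Pick the difficulty of the next simulated phish for one employee.

    A static, human-admin-style program would simply return `current`.
    The agent-style behavior escalates after a pass and steps back
    (alongside remedial training) after a failure.
    """
    idx = DIFFICULTY_LEVELS.index(current)
    if passed_last_simulation:
        # Employee spotted or reported the phish: raise the bar next time.
        return DIFFICULTY_LEVELS[min(idx + 1, len(DIFFICULTY_LEVELS) - 1)]
    # Employee clicked or otherwise failed: ease the difficulty back down.
    return DIFFICULTY_LEVELS[max(idx - 1, 0)]

# An employee who keeps passing moves from "moderately" to "very" difficult
# templates instead of staying flat at the level an admin originally picked.
print(next_difficulty("moderately", passed_last_simulation=True))   # -> "very"
print(next_difficulty("moderately", passed_last_simulation=False))  # -> "least"
```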
Q. AI agents create a bigger attack surface and are clearly being used against people. But how immune are AI agents to attack?
A. It seems like there are going to be a lot of attacks against AI. I came up with a list of 22 different types of attacks against AI. It's kind of wild. We don't have to worry about MS Word attacking us with AI. But AI can be used to attack you. And AI-enabled scams still [bring in] 4.5 times more value. That's pretty significant. We know from our testing that the AI scams are more successful. That means this year we're going to see a massive movement from human-run attacks to AI-enabled attacks. 2026 will be the year that transition completely happens.
Q. As AI improves, how can the average worker identify deep fakes?
A. At the same time that we're trying to protect the humans against the AI attacks, our deep fake training agent lets you make a deep fake training video of your CEO and push it out like a normal campaign. If someone fails that, they get training. We have to protect and secure the AI agents people are using. If you don't do that, you're not protecting the humans, who are not going to be able to tell the difference between a real video and a fake one. So, you need to focus on the message. Even if it's a deep fake message, they still have to scam you. They still have to create the scenario that causes you to perform an action that's dangerous. The idea is that scammers still have to induce you to do something that will harm you or your employer's interests. Focus on that. Are they trying to create this emergency response, this induced panic?
Q. So many people are afraid AI is going to replace people or harm humanity. What do you think?
A. I'm an AI realist. I don't think AI is going to put everybody out of a job. AI can do some great things. But for the first time in my 40-year career, I think we're going to have something that's going to help protect people, in a general way, from scams; AI is going to defeat that. I have great confidence that AI will help get rid of the vast majority of scams.
Q. Can AI live up to its hype? And since so much is unknown, should organizations proceed with caution?
A. In my analysis, I looked at what AI does that older software can't do. One of the big things is that it can suck up data, including new data, that allows it to change. It's autonomous, and there are different levels of autonomy. People have to be careful before giving it domain admin passwords and saying, "Go change everything." There could be some issues there. But inherently, AI does make better software. I went from being kind of a skeptic to being an advocate, at least to say that AI-enabled doesn't mean software is innately better, but it does give it the potential to be better at what it's doing. Being automated with AI makes it easy for training to be hyper-personalized, because it's reading the data of the person. It's able to consume more data about an organization. It's able to pick up new phishing attacks. We used to have to publish: here's a new phishing attack, make a template. AI is making the phishing template now and saying, "There's this new one." Maybe there's a new phishing attack, but a person has demonstrated they're not going to be susceptible to that sort of phish, so it's going to phish them in a different way or at an increased level of difficulty.
Q. Creating a security culture is crucial to defense. How does AI play a role?
A. We talk about culture a lot. If you walk into an organization as a brand-new employee and you see everybody locking their screen before they step away, you get the visual cue that this is a place where you should lock your screen before you leave. You want to create this healthy culture where people are making good decisions, being nudged in the right direction, and getting immediate feedback if they do something like go to the wrong website or click on a phishing email. They didn't read the policy; they just saw someone performing the action, like reporting a phishing attack. If someone sees their coworker doing that, they're more likely to do it. We are seeing very positive benefits in culture.
Q. Security awareness training sometimes gets sidelined or disparaged as unnecessary.
A. We see some people saying security awareness training didn't work. We've got data to show that it works, but we don't recommend doing it in a vacuum either. If security awareness training is all you're doing, you're not getting the full benefit for your employees. Simulated phishing is a big part of training, and that can be an even bigger benefit than just a training video. And then there are times when you nudge people if they break a policy or do something wrong. The software will help nudge them and remind them of what company policy says, like you can't go to a certain website. If someone starts to install an AI tool, we've got these little nudges that tell them to make sure they're following policy and not putting in confidential information.
Humans will continue to make the same missteps and will be caught unawares as Agentic AI reshapes the threat landscape and improves phishing and other attack techniques. It is more important than ever for organizations to adopt more robust security awareness training.