Cybersecurity

Q&A: Splunk Security Senior Vice President and General Manager John Morgan Expects Humans and AI Agents to Collaborate in the Modern SOC

Teri Robinson

Apr 08, 2026

AI is reshaping the SOC with humans and AI agents ultimately working side by side. The opportunities to relieve alert fatigue and increase efficacy are clear. But those are only some of the benefits.

TechChannels caught up with John Morgan, Senior Vice President and General Manager of Splunk Security, at RSAC 2026, where Cisco and Splunk made a flurry of announcements, introducing AI agents and building a framework that will help organizations put the right security controls in place. Just seven weeks into his tenure, Morgan discussed the importance of managing non-human identities, keeping humans in the loop and reimagining work.

Q. What concerns you about AI?

A. What are jobs going to look like? What is the world going to look like? How do we prove value? Who is going to own the data? It feels like whoever can edit the data, alter the data, and train the AI is somehow going to rise higher in the hierarchy, in power.

Q. Let’s talk about the SOC with AI in play. What does the modern-day SOC look like going forward?

A. It's going to be an evolution, as we might all anticipate. The industry has been on a modernization and “automate what you can” track for many years. AI agents are a step function in that ability to automate. And wherever we can automate, we're going to continue to do that. Agents are going into the SOC. Humans are going to be setting policy and deploying agents, then human analysts and agents are going to be collaborating side by side. The idea is that's going to increase your coverage, because organizations that are currently dropping alerts on the floor will get better efficacy. We'd hope that it's going to alleviate the analyst burnout that's occurring.

Q. Agents are goal-oriented, though, and there seems to be considerable fear they will just go off on their own. That could be dangerous in a SOC.

A. This is a collaboration of sorts between a human analyst and a set of agents. Agents can have a mind of their own; they're unpredictable, not deterministic, and that really matters in the security operations center. You might be able to argue that would be okay in certain low-risk areas, but when you're in a security operations center, you have to ensure that you have control. That's what I call separation of duty: having agents with very well-defined jobs, and you keep them to those jobs, on a need-to-know basis only. That is an important part of the design. At a high level, reporting, compliance, risk and posture are all coming together with humans' aid. Humans have to remain in control.
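The "separation of duty" idea Morgan describes can be pictured as a simple allow-list policy check: each agent is granted a narrowly scoped set of actions, and anything outside that scope is refused rather than improvised. This is a minimal, hypothetical sketch of that principle; the agent names and actions are illustrative only and do not reflect Splunk's or Cisco's actual products or APIs.

```python
# Hypothetical separation-of-duty check for SOC agents:
# every agent has a fixed, well-defined job scope, and any action
# outside that scope is denied by default ("need to know" only).

ALLOWED_ACTIONS = {
    "triage-agent": {"read_alert", "enrich_alert", "assign_severity"},
    "containment-agent": {"read_alert", "isolate_host"},
}

def authorize(agent: str, action: str) -> bool:
    """Permit an action only if it falls within the agent's defined job."""
    return action in ALLOWED_ACTIONS.get(agent, set())

# A triage agent may read alerts but never isolate hosts,
# and an unknown agent is denied everything.
print(authorize("triage-agent", "read_alert"))      # True
print(authorize("triage-agent", "isolate_host"))    # False
print(authorize("unknown-agent", "read_alert"))     # False
```

In a real deployment the policy store and enforcement point would live in the orchestration layer, with a human setting and auditing the scopes, which is the "humans remain in control" part of Morgan's point.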

Q. Does having humans in that equation speak to accountability? Somebody must know that agents are operating the way they’re supposed to and not going beyond that.

A. That is a great point. There are a few things that are going to change in the industry. If you look at heavily regulated industries like healthcare and government, we can detect private, personal information in a HIPAA context, or confidential information leakage. But detecting whether the output of an agent is accurate, on point or wrong, without violating any known confidentiality constraints, and knowing what the intent is and that the output credibly matches that intent, is a whole other layer of verification that has to be baked into the systems.

Q. Agents are changing the definition of who's an employee, who's an insider, but current security measures are not built for non-human identities. How do organizations manage these new “insiders”?


A. OpenClaw has come out and really changed the dynamic of things, and now anyone can deploy agents like this, depending on their risk profile. There's basically what I would call an AI-trusted governance stack being built in the industry. There's no standard for it right now, but it is being built up. When you have that, the ways of working will change. Organizations that can afford it and have the capabilities will build the right controls, because they'll see they can get a competitive advantage.

Q. People are so afraid that humans are going to be replaced. But on the other side of that coin, aren't there jobs that are going to be coming up that we didn't imagine before?

A. I have a different view. You can look at a lot of different industries and ask what jobs are going away. I don't know why software development gets a disproportionate [amount of attention regarding job loss]. The pace at which this is absorbed will force a quicker shift in how jobs move. I don't see it as a net job reduction. I think there is going to be movement. And I do think the industry has it a little bit wrong right now, especially as we talk about software development. Fundamentally, soon, the industry is going to realize that the young people coming out of college are AI natives, and they can actually make the biggest change the fastest. They won't have the knowledge of the industry, so you'll need to pair them with people who know the business, whatever business they're in. We'll realize that those who can control and utilize AI are going to do things and create new jobs that we haven't seen before. The more people learn the technology and how to use it, the more job creation we're going to have to offset losses. And we're going to be doing things that we never could have done, that we never could have thought of. Yes, things are going to change. The pain is they may change faster than we've seen in the past.

Q. Speed seems to be the issue people are grappling with the most—that, and agentic AI has everybody, not just those responsible for infrastructure, touching everything.

A. Speed is a concern. But this is why we believe at Cisco that we must get these controls and enablement capabilities out there as fast as we can, with a level of responsibility, obviously. That will ease the social and economic issues that may arise if things move fast and people don't feel like they can deploy and use the technology. An important issue is to get the enablers out quicker, so that they can be utilized, and people can learn how to change their ways of doing things and change their work so they can adapt to it. I think that's really important.

Q. There’s a lot of opportunity, too. Perhaps people won’t get pigeonholed in certain positions—and will be more immune to layoffs—and organizations won’t lose the institutional knowledge that workers have, because AI allows workers to reskill and upskill more easily.

A. People will have the opportunity to reimagine how they do their work. And it's still their work and it's still their time. It’s sometimes hard in daily lives to take the time to figure out how to do the work day differently. But I hope that everyone at some point reimagines how they do their work.

Going forward, Morgan recommends that people take time to reimagine work and learn how to use AI tools, which are more accessible than they may think. And organizations must move quickly to put in place the right controls to maximize protection of the SOC and other assets.



 
