As AI becomes more embedded in software development and security operations, organizations must contend with exponentially growing code generation, collapsing exploitation timelines, and mounting pressure to move from detection to remediation.
Tech-Channels spoke with Averlon CEO and Co-founder Sunil Gottumukkala and Rajeev Raghunarayan, head of marketing, about how AI is reshaping vulnerability management, why remediation is becoming critical, and how goal-driven AI systems are introducing new risks that organizations are only beginning to understand.
Q. AI has been around for years. What’s different now?
RR: Fifteen years ago, I ran a product that recorded audio and tagged who was speaking at any given point. We gave people navigation through the recording, much of what AI is doing now. The difference is that today anyone can use AI to improve their work and shorten the time it takes.
Q. What are the biggest themes you’re seeing around AI right now?
SG: Last year, the theme was: if you’re not shipping an agent, you’re not relevant. This year, there isn’t one standout theme yet, but speed is definitely a big one. Governance is also top of mind. Customers are asking: how do we control these systems?
Q. What problem are you trying to solve?
SG: We started with a simple question: what are the real risks in an organization, and are we staying ahead of them? In large enterprises, even with hundreds of millions invested in security tools, you can still end up with hundreds of thousands—or even a million—findings. Nobody is addressing a million issues. You need systems that can reason about that data and bring it down to a practical number. Our product looks at all the outputs from security tools and assesses exploitability—can this actually be exploited, what’s the impact, and how far can an attacker go?
Q. Why focus on exploitability instead of traditional prioritization?
SG: Some tools prioritize based on metadata or rules, but that only gets you to a surface-level understanding. You need to understand: can this be exploited? Can it reach customer data? Can it take over the cloud? That’s what separates noise from real risk.
Q. How important is remediation in this new environment?
SG: You can’t just prioritize and call it a day—you need to remediate. We generate fixes—configuration changes, code fixes, upgrades—and give them to the user. The human reviews it, makes sure it won’t break anything, and then deploys it. That’s the shift—from identifying problems to eliminating them.
Q. How is AI impacting code quality and risk?
SG: AI is incredibly good at generating code, but it’s goal-driven—it’s trying to complete the task, not produce the most secure code. As a result, the number of findings and potential exploits is growing exponentially. People already had problems with their code, and if organizations struggled with vulnerable code before AI, this only amplifies the problem.
Q. How is the speed of attacks changing?
RR: The timelines are collapsing. Three years ago, the time from vulnerability disclosure to exploit was under five days. Now, about 25% of vulnerabilities have a proof-of-concept exploit within a day.
SG: And that window is shrinking even further. If something is exploitable, it’s going to be exploited—if not today, then very soon. That changes everything. You can’t rely on “ship now, fix later.” That model doesn’t work in this environment.
Q. Are attacks themselves becoming more autonomous?
SG: Yes. We are starting to see AI-driven attacks behave in more autonomous ways. Once a system is compromised, as in the OpenClaw hack, an AI agent doesn’t need to be explicitly directed—it can look for equivalent targets on its own and expand the attack surface. The OpenClaw hack even showed worm-like behavior, propagating itself across repositories. Defenders have to figure out whether somebody wrote it and whether it is malicious, and that kind of attack isn’t fully attributed under law yet. That’s a very different threat model from what we’ve seen before.
Q. What new risks do AI systems introduce?
SG: They’re goal-driven—they don’t have a moral framework. They’re amoral. You can say ‘don’t do this,’ but they’ll still try to find a way around constraints to achieve the goal. The challenge is defining absolute boundaries and enforcing them across all possible variations. That’s why humans are still part of the equation, or need to be, at least for the foreseeable future in this space.
Q. How does this impact vulnerability management and disclosure?
RR: The timelines are compressing so much that traditional disclosure models are breaking down. It changes the whole of vulnerability management and disclosure. If these things can be exploited so quickly, how can you disclose unless you have a patch? Even then, most organizations aren’t going to patch the minute one is available. The ransomware angle makes it worse, and attribution still isn’t fully there.
SG: This forces a shift toward proactive detection and remediation instead of reactive patching.
Q. What does the future of security look like?
SG: You can’t take the risk of putting vulnerable code out there anymore—it will get exploited. We’re moving toward systems that automatically identify what matters, generate fixes, and help teams act quickly. It’s about reducing noise, focusing on real risk, and closing the loop between detection and remediation.
As AI continues to accelerate both software development and attack capabilities, the gap between vulnerability and exploitation is shrinking to near real time. For security teams, that means the old playbook—prioritize, patch, and react—is no longer enough. Instead, organizations must embrace systems that can assess risk dynamically, automate remediation, and operate at the same speed as the threats they face. In this new environment, the ability to move from detection to action is not just an advantage; it is quickly becoming the baseline for survival.