Fujitsu is betting that the only way to outpace an autonomous adversary is with an autonomous defense. The bet matters more than ever: critical vulnerabilities are exploited within hours of disclosure, and large language models can be manipulated with prompts that appear benign but function as hidden payloads.
What’s more, industrial control systems can be destabilized not by shutting them down, but by quietly corrupting the sensor data that drives their decisions.
Fujitsu’s Multi-AI Agent Security platform abandons the monolithic “single AI” model in favor of a team of specialized AI agents, each with a defined operational role (attack, defense, and test) that together form a closed-loop security ecosystem (Fujitsu, 2025).
The System-Protecting AI Agent suite works like an automated war game. The Attack AI models new exploit scenarios as soon as a vulnerability is reported, while the Test AI spins up a cyber twin (an isolated virtual replica of the live environment) to run those exploits without risk to production. The Defense AI then designs and validates countermeasures based on the test results, providing administrators with actionable fixes that don’t require deep security expertise.
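The attack/test/defense loop can be sketched in miniature. The class names, the dictionary-based "cyber twin," and the patched/unpatched check below are all illustrative assumptions, not Fujitsu's implementation; in the real platform each agent would be an LLM-driven component rather than a few lines of logic.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    service: str

class AttackAgent:
    def model_exploit(self, vuln: Vulnerability) -> dict:
        # A real agent would synthesize an exploit scenario from the
        # disclosure; here we return a structured placeholder.
        return {"target": vuln.service, "payload": f"exploit-for-{vuln.cve_id}"}

class TestAgent:
    def __init__(self, cyber_twin: dict):
        # The "cyber twin" is an isolated replica of production state.
        self.cyber_twin = cyber_twin

    def run_exploit(self, exploit: dict) -> bool:
        # The exploit "succeeds" if the twin still runs an unpatched service.
        svc = self.cyber_twin.get(exploit["target"], {})
        return svc.get("patched") is False

class DefenseAgent:
    def countermeasure(self, vuln: Vulnerability, exploit_succeeded: bool) -> str:
        if exploit_succeeded:
            return f"apply patch for {vuln.cve_id} on {vuln.service}"
        return "no action needed"

def closed_loop(vuln: Vulnerability, twin: dict) -> str:
    exploit = AttackAgent().model_exploit(vuln)       # attack: model the exploit
    succeeded = TestAgent(twin).run_exploit(exploit)  # test: run it on the twin
    return DefenseAgent().countermeasure(vuln, succeeded)  # defense: respond
```

The point of the sketch is the data flow: the Attack agent's output is the Test agent's input, and the Defense agent acts only on validated test results, never directly on production.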
This division of labor is deliberate. By decoupling attack simulation, defensive engineering, and testing, Fujitsu maximizes reasoning depth and makes each agent interoperable with a client’s existing systems. In high-complexity networks, modularity is critical: a defense system that can plug into or detach from other infrastructure without disruption is far more deployable at scale.
The other half of Fujitsu’s architecture tackles a newer battlefield: the large language models now embedded in everything from healthcare triage systems to customer service bots. Here, the Generative AI Security Enhancement technology pairs two more AI agents: the LLM Vulnerability Scanner and the LLM Guardrail. The scanner identifies over 7,000 malicious prompt variations across 25 attack categories, probing for weaknesses in both off-the-shelf models, such as GPT-4, and proprietary corporate LLMs. The Guardrail, meanwhile, intercepts prompts likely to elicit harmful or noncompliant outputs and blocks them in real time.
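A guardrail of this kind sits between the user and the model and rejects suspicious prompts before inference. The minimal sketch below uses two regex patterns as stand-ins for the thousands of attack variations Fujitsu's scanner covers; a production guardrail would use trained classifiers, and every pattern and function name here is an assumption for illustration.

```python
import re

# Illustrative prompt-injection signatures (a real guardrail would use
# classifiers trained on the scanner's attack corpus, not regexes).
BLOCK_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal.*system prompt", re.IGNORECASE),
]

def guardrail_blocks(prompt: str) -> bool:
    """Return True if the prompt matches a known attack pattern
    and should be blocked before reaching the model."""
    return any(pattern.search(prompt) for pattern in BLOCK_PATTERNS)
```

Placing the check in front of the model, rather than filtering outputs afterward, is what makes the interception "real time": a flagged prompt never consumes inference at all.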
To push the technology beyond lab conditions, Fujitsu is collaborating with Carnegie Mellon University and Ben-Gurion University. CMU’s OpenHands platform underpins the orchestration layer, while Ben-Gurion University’s GeNet technology automates cyber twin generation and LLM role-based access controls. This cross-pollination of AI architecture expertise and deep security research has already shaped deployable features, such as enforcing role-based access control (RBAC) inside LLMs to reduce the risk of sensitive data leaks based on user privileges.
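One common way to enforce RBAC around an LLM is to filter the retrieval step so that documents above the caller's privilege level never enter the model's context. The role hierarchy, document tags, and `retrieve` function below are hypothetical; Fujitsu has not published its mechanism, so this is only a sketch of the general pattern.

```python
# Hypothetical role hierarchy: higher number = more privilege.
ROLE_LEVELS = {"guest": 0, "employee": 1, "admin": 2}

# Each document is tagged with the minimum role allowed to see it.
DOCUMENTS = [
    {"text": "Public FAQ", "min_role": "guest"},
    {"text": "Internal runbook", "min_role": "employee"},
    {"text": "Payroll export", "min_role": "admin"},
]

def retrieve(user_role: str) -> list[str]:
    """Return only the documents the caller's role may access,
    so restricted text never reaches the LLM's context window."""
    level = ROLE_LEVELS[user_role]
    return [doc["text"] for doc in DOCUMENTS
            if ROLE_LEVELS[doc["min_role"]] <= level]
```

Because the filter runs before the model sees anything, even a successful prompt injection cannot extract a document the user was never allowed to retrieve.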
Traditional rule-based defenses struggle against the sheer novelty of today’s AI-driven threats. In multi-agent offensive operations, attackers might not breach the perimeter at all; instead, they compromise a low-visibility endpoint and pivot through the network, or manipulate decision-critical data streams. Fujitsu’s autonomous agents work in parallel to match the adversary’s speed and complexity.