In what has become a disturbingly common error among AI users, the acting head of America’s cyber defense agency last summer uploaded sensitive government documents into a public version of ChatGPT.
Madhu Gottumukkala, interim director of the Cybersecurity and Infrastructure Security Agency (CISA), uploaded contracting documents marked “For Official Use Only” into ChatGPT (Politico, 2026). Automated security systems flagged the activity and triggered multiple alerts, prompting a Department of Homeland Security–level damage assessment. The documents were not classified, but they were explicitly restricted from public disclosure.
But what makes the incident more revealing is the context. At the time, ChatGPT was blocked for most DHS employees. Gottumukkala requested and received a special exception shortly after arriving at CISA. Other AI tools approved within DHS, such as DHSChat, are architected to prevent data from leaving federal networks. Public ChatGPT is not. That distinction is well understood in government cybersecurity. Public AI systems generally retain user inputs unless governed by enterprise agreements. Uploading sensitive operational material to them creates a data-exposure risk that agencies have spent years warning about.
CISA’s internal sensors quickly detected the uploads. Alerts fired within days. Senior officials across DHS, including legal and information security leadership, reviewed the scope and potential impact of the exposure.
From a technical standpoint, the system functioned as intended. Detection worked. Escalation worked. The failure occurred at the point of human decision-making. This pattern is increasingly common across regulated environments. AI tools reduce friction. They reward speed and convenience. Governance frameworks, by contrast, still assume deliberation, restraint, and procedural memory. The gap between the two is where risk accumulates.
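The detection step described above can be sketched as a simple content-marking egress rule. This is a hypothetical illustration only: the marking patterns, host names, and function are invented for the sketch and do not reflect CISA's or DHS's actual tooling.

```python
import re

# Hypothetical data-loss-prevention (DLP) rule: flag outbound payloads that
# carry a control marking ("For Official Use Only" / FOUO / CUI) and are
# headed to an external AI service not approved for federal data.
# All names and destinations below are illustrative assumptions.

FOUO_MARKINGS = re.compile(
    r"\b(FOR OFFICIAL USE ONLY|FOUO|CUI)\b", re.IGNORECASE
)

# Example destinations; a real deployment would use a managed allow/deny list.
UNAPPROVED_AI_HOSTS = {"chatgpt.com", "chat.openai.com"}


def flag_upload(destination_host: str, payload: str) -> bool:
    """Return True when an upload should raise an alert: a sensitive
    marking in the payload bound for an unapproved external AI service."""
    return (
        destination_host in UNAPPROVED_AI_HOSTS
        and FOUO_MARKINGS.search(payload) is not None
    )


# A marked document sent to a public AI endpoint trips the rule;
# the same content sent to an in-network tool (e.g. a DHSChat-style
# internal host) does not.
print(flag_upload("chatgpt.com", "Contract SOW: FOR OFFICIAL USE ONLY"))
print(flag_upload("dhschat.internal", "FOUO contract summary"))
```

The point of the sketch is that this layer is cheap and reliable, which is why, as the incident shows, the weak link tends to be the human exception process rather than the sensor.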
FOUO material sits in a grey zone that is often underestimated. It is not classified, but it is sensitive by design: contracts, internal assessments, operational details. This is the kind of information adversaries aggregate, correlate, and exploit over time.
Federal employees receive recurring training on handling such material. DHS policy requires investigation into cause, impact, and corrective action when exposures occur, with outcomes ranging from retraining to clearance review. What remains unclear is whether any consequences followed this incident. That uncertainty points to a deeper institutional issue: accountability asymmetry. When senior leadership crosses a line, enforcement becomes discretionary. When junior staff do, it is procedural.
This episode fits into a wider trend across governments and critical infrastructure sectors. Institutions are under pressure to demonstrate AI adoption, often driven by executive mandates emphasizing innovation and competitiveness. At the same time, practical rules around data handling, AI usage boundaries, and exception management lag behind reality.
CISA, along with NIST and other federal bodies, has published guidance warning of unmanaged AI use and data exfiltration risks. Blocking public AI tools by default was meant to be a safeguard, with exceptions rare, narrow, and temporary.
What this case shows is how fragile that model becomes when exceptions are normalized at the top, making rules feel flexible and guardrails seem optional. Intent starts to matter more than procedure.
For adversaries, this is valuable information.