A new class of prompt injection vulnerabilities collectively dubbed ‘PromptPwnd’ shook up DevOps teams late last year. The exploit allows malicious actors to inject hidden commands into text fields like GitHub issues or pull request descriptions, tricking AI-powered coding assistants into executing those commands during automated build processes.
With many DevOps teams now integrating AI agents into their workflows, these vulnerabilities aren’t something to take lightly. For instance, PromptPwnd can target GitHub Actions that use AI to triage issues, generate code suggestions, or automate documentation. Researchers at Aikido Security found that if an AI agent in a CI/CD workflow natively processes user-provided text as part of its prompt, an attacker can embed malicious instructions in that text. While most reported ‘attacks’ have so far been part of security research and ethical disclosures, PromptPwnd had become an active attack vector in CI/CD supply chains by January 2026.
Fortunately, GitHub and Google were quick to patch the issues, but other AI-in-CI workflows remain exposed. While some require attackers to have collaborator access, others can be triggered—at least in theory—by any external user simply by opening an issue or pull request on a public repo. Given how many public-facing repos and open-source contributions there are, the implications are serious. As a result, the use of AI assistants in development pipelines can become unintended backdoors, unless they are properly sandboxed.
Traditional CI/CD security focuses on controlling code and dependencies, but PromptPwnd revealed a new blind spot in the form of prompts and outputs from AI systems integrated into pipelines. Unlike human-written scripts, AI behavior can be influenced by external text, thus blurring the line of trust in build processes. The potential outcomes of a successful attack include the leakage of confidential data like passwords and API keys, unauthorized code changes, or inserting backdoors—all under the radar of conventional code reviews due to the fact changes are made by the AI after the review.
PromptPwnd is just the latest in a string of prompt injection vulnerabilities that together present one of the severest security threats in the AI era. For software companies with development workflows or product CI using AI (such as code review and test generation), it has never been more important to routinely conduct security audits. As AI becomes ever more deeply entwined in software engineering, security professionals and developers alike must be able to anticipate novel attack modes by updating their threat models and tooling continuously. Doing so will give them a competitive edge in delivering trustworthy, AI-powered solutions.
.png?width=1816&height=566&name=brandmark-design%20(83).png)