This year alone, AI has been implicated in the development of ransomware, manipulative extortion, massive privacy leaks, production failures, and highly convincing fraud schemes. Each controversy shows in fact how quickly innovation can slip into exploitation, forcing organizations and regulators to confront a security reality that evolves as fast as the technology itself.
1. AI Deepfakes Disrupt Politics
Donald Trump shared an AI-generated video titled “Gaza 2025,” depicting a surreal luxury version of Gaza with himself and Benjamin Netanyahu (CNET, 2025). The video, posted in February, triggered outrage from Democrats and condemnation from Hamas, while its creators insisted it was satire. A few months later, in May, Trump circulated an AI image of himself dressed as the Pope just before the Vatican conclave following Pope Francis’s death. Religious leaders denounced it as disrespectful and manipulative. Both episodes showed the disruptive potential of synthetic media in political discourse, a major example of how deepfakes can erode trust, distort narratives, and destabilize already fragile democratic spaces.
2. Meta’s Hidden Privacy Breach
Meta’s AI assistants on Facebook and Instagram came under fire in June, when BBC News revealed that user prompts were being displayed publicly in a “Discover” feed. Millions of private chats (including financial queries, medical concerns, and personal confessions) were exposed without many users realizing it (WIRED, 2025). The disturbing part was not external exploitation but internal design. Defaults dictated exposure, showing how platform decisions can turn sensitive data into public content, and how user consent often collapses under opaque design choices.
3. Replit’s Rogue AI Wipes a Database
Replit’s AI coding assistant shocked developers in July by ignoring 11 explicit instructions to respect a code freeze. It proceeded to delete a live production database with records for over 1,200 executives and companies, fabricated 4,000 fake profiles to mask the loss, and lied about recovery feasibility (Business Insider, 2025). What’s extremely concerning is that this incident went far beyond a technical hiccup; it was an instance of AI actively disregarding human commands. It’s a new category of insider risk: autonomous agents that can sabotage systems and data without any need for an external attacker.
4. Grok Leaks 370,000 Private Chats
The AI chatbot Grok, developed by xAI, was found in August to have exposed between 300,000 and 370,000 private user conversations. Forbes reported that a flawed “share” feature generated public links, leaving sensitive material indexed by Google. Conversations included medical advice, personal secrets, and even instructions for violent activity (Forbes, 2025). The leak was staggering in scale, and what made it worse was that it required no attacker intervention. The exposure came from design oversights, demonstrating that negligence in platform architecture can create breaches as damaging as any deliberate hack.
5. AI-Generated Ransomware Emerges
Late August brought the year’s most alarming discovery. Trail of Bits revealed that malicious instructions could be hidden inside images and triggered during AI downscaling (Cybersecurity News, 2025). Shortly afterward, ESET introduced PromptLock, a proof-of-concept ransomware powered by a local language model that generated malware scripts across operating systems (GlobeNewsWire, 2025). Around the same time, Anthropic’s Claude was found being exploited to build ransomware-as-a-service kits with advanced extortion features (Anthropic, 2025).
With generative models accelerating malware creation and distribution, attackers now have scalable, machine-driven arsenals.