The Weaponization Era: when AI becomes a cyber weapon
Published on October 15, 2025
The cybersecurity landscape has entered a dangerous new phase. Artificial intelligence, once heralded as our digital protector, has become a double-edged sword: it strengthens our defenses while simultaneously arming cybercriminals with unprecedented capabilities.
The numbers don't lie
The statistics offer a sobering picture of our current reality. AI-driven cyberattacks surged by 47% globally in 2025, while 68% of cyber threat analysts report that AI-generated phishing attempts are now harder to detect. Perhaps most alarming: 87% of global organizations faced AI-powered cyberattacks in the past year, with the average cost of an AI-powered data breach reaching $5.72 million.
These aren't distant threats — they're happening now. In 2025, global AI-driven cyberattacks are projected to surpass 28 million incidents, with 14% of major corporate breaches being fully autonomous, requiring no human intervention after the AI launched the attack.
From science fiction to reality
What makes this era particularly dangerous is how AI has democratized sophisticated cyber warfare. Traditional attacks required specialized knowledge and significant time investment. Today, 82.6% of phishing emails use AI technology in some form, and these AI-crafted messages achieve a 78% open rate, a success rate that should terrify security professionals.
The weaponization extends far beyond simple phishing. 41% of ransomware families now include AI components for adaptive payload delivery, while synthetic media attacks, including deepfakes, grew by 62% year-over-year in 2025.
Several of these figures come from the following sources:
- https://sqmagazine.co.uk/ai-cyber-attacks-statistics/
- https://www.aiacceleratorinstitute.com/how-ai-is-redefining-cyber-attack-and-defense-strategies/
- https://www.crowdstrike.com/en-us/blog/crowdstrike-2025-threat-hunting-report-ai-weapon-target/
In the real world
The North Korean Connection
Perhaps no case better illustrates AI weaponization than FAMOUS CHOLLIMA, a North Korean hacking group that has infiltrated over 320 companies in the last 12 months, a 220% year-over-year increase. These operatives use generative AI to craft convincing resumes, deploy real-time deepfake technology (stay tuned: we'll cover that in a future article) to mask their true identities in video interviews, and leverage AI coding tools to perform their jobs while funding nuclear weapons programs.
The sophistication is remarkable. These aren't simple scams; they're multi-layered operations using "laptop farms" in the U.S. to evade geolocation controls, allowing North Korean operatives to work remotely for American companies while remaining physically overseas.
The $25 Million Deepfake Robbery
In what security experts call a watershed moment, a Hong Kong-based Arup employee was tricked into transferring $25.6 million to fraudsters during a video conference call. Every other participant in the meeting was an AI-generated deepfake of the company's CFO and colleagues. The employee initially suspected phishing but was convinced by the realistic video call featuring familiar faces and voices.
This wasn't a Hollywood plot. It was a 15-transaction operation across five bank accounts, executed through deepfakes created from publicly available videos of company executives. The fraud was only discovered when the employee later contacted headquarters for confirmation.
Malware is Evolving
BlackMamba represents the new frontier of AI-weaponized malware. This proof-of-concept demonstrates how AI can re-synthesize its keylogging capability every time it executes, making the malicious component truly polymorphic. Every execution creates a unique variant, defeating traditional signature-based detection systems.
BlackMamba's approach: it reaches out to OpenAI's API at runtime to generate malicious code, then executes it in memory using Python's exec() function. To security systems, it appears as benign communication with a high-reputation service.
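To make that pattern concrete from a defender's point of view, here is a minimal, hypothetical detection sketch in Python. It uses the standard ast module to statically flag scripts that combine network-capable imports with dynamic code execution, the fetch-and-exec combination described above. The module names in the heuristic are illustrative assumptions on my part, not a production detector.

```python
import ast
import sys

# Names whose presence together suggests the "fetch code, run it in memory"
# pattern described above (an illustrative heuristic, not a real product).
NETWORK_HINTS = {"requests", "urllib", "http", "openai"}
DYNAMIC_EXEC = {"exec", "eval", "compile"}

def scan_source(source: str) -> list[str]:
    """Return warnings for scripts mixing network access with dynamic execution."""
    tree = ast.parse(source)
    imports, dyn_calls = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imports.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.add(node.module.split(".")[0])
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DYNAMIC_EXEC:
                dyn_calls.add(node.func.id)
    warnings = []
    if imports & NETWORK_HINTS and dyn_calls:
        warnings.append(
            f"suspicious: network modules {sorted(imports & NETWORK_HINTS)} "
            f"combined with dynamic execution via {sorted(dyn_calls)}"
        )
    return warnings

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        for warning in scan_source(f.read()):
            print(warning)
```

Of course, a static heuristic like this is trivially evaded by obfuscation, and that is precisely the point: polymorphic, AI-generated payloads push defenders away from signatures and toward behavioral and runtime detection.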
The Acceleration Factor
WormGPT and FraudGPT have emerged as the "dark ChatGPTs": AI tools specifically designed for cybercrime, with no ethical guardrails. These platforms, sold as subscription services for $60-$700 per month, enable even inexperienced cybercriminals to craft sophisticated phishing campaigns and generate malicious code.
The result? A 1,265% increase in AI-driven phishing attempts, and a 703% rise in credential phishing attacks in the second half of 2024 alone.
Beyond Traditional Boundaries
What distinguishes this era is how AI has eliminated traditional friction in cybercrime. The old security advice, "spot the typo, spot the scam," is officially obsolete. AI now crafts flawless, hyper-personalized phishing emails that bypass both technical filters and human intuition.
More concerning is the emergence of autonomous AI agents attacking other AI models. These systems can poison the synthetic data used to train AI models and tamper with open-source models before public release, creating vulnerabilities that activate only after widespread deployment.
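One baseline mitigation against tampered open-source models is refusing to load any artifact whose hash doesn't match a digest you pinned when you vetted it. A minimal sketch, assuming you maintain your own allowlist of SHA-256 digests (the file name and digest below are placeholders, not real values):

```python
import hashlib
from pathlib import Path

# Pinned SHA-256 digests of model files you have already vetted.
# The entry below is a placeholder, not a real digest.
TRUSTED_DIGESTS = {
    "sentiment-model.onnx": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: Path) -> None:
    """Refuse to load a model file whose digest doesn't match the allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name}: no pinned digest; refusing to load")
    if digest != expected:
        raise RuntimeError(f"{path.name}: digest mismatch; possible tampering")

verify_model(Path("models/sentiment-model.onnx"))  # raises on any mismatch
```

Digest pinning does nothing against poisoned training data, but combined with signed releases it closes the "tampered before public release" path.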
The Response Imperative
The cybersecurity industry recognizes the magnitude of the threat: the AI cybersecurity market is projected to reach $82.56 billion by 2029, growing at a compound annual growth rate (CAGR) of 28%. However, 68% of cyber threat analysts report that traditional threat intelligence is insufficient against AI-accelerated attacks.
This isn't merely about upgrading tools; it's about fundamentally rethinking cybersecurity architectures and models. AI-powered defenses are no longer optional: they're essential for survival in an environment where 35% of botnet operations now incorporate machine learning algorithms to evade detection in real time.
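As a toy illustration of what "AI-powered defense" can mean in practice, the sketch below trains an unsupervised anomaly detector on synthetic network-flow features and flags a beaconing bot. The features and values are invented for the example; a real deployment would use far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature vectors per connection: [bytes_sent, bytes_received,
# duration_seconds, requests_per_minute]. All values are synthetic.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(
    loc=[500, 2000, 30, 5], scale=[100, 400, 10, 2], size=(1000, 4)
)

# Fit an unsupervised model on "known-good" traffic only.
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

# A beaconing bot: tiny payloads, machine-regular timing, high request rate.
suspect = np.array([[50, 60, 1, 120]])
print(detector.predict(suspect))        # -1 flags an anomaly, 1 means normal
print(detector.score_samples(suspect))  # lower scores are more anomalous
```

The design choice matters: because ML-driven botnets adapt to mimic normal traffic, a detector like this must be continuously retrained on fresh baselines rather than tuned once and forgotten.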
The Future Course
The weaponization of AI represents more than a technological challenge. It's an important change that demands immediate action. Organizations can no longer rely on periodic security reviews or traditional perimeter defenses. The new reality requires continuous adaptation, AI-powered detection systems, and human-AI collaboration.
As we navigate this weaponization era, one thing appears clear: the organizations that adapt quickly will survive. The question isn't whether AI will continue to be weaponized; it's whether we'll be ready when it is.
🔎 Want to dive deeper into real-world cases, stats, and strategies to defend against AI-powered cyberattacks? https://neverhack.com/en/offers/soc-mssp
https://neverhack.com/en/offers/offensive-security
Raffaele Sarno
Head Pre-sale Manager, NEVERHACK Security Operation Department, Italy

