On December 4, 2025, a 17-year-old was arrested in Osaka under Japan's Unauthorized Access Prohibition Act. He had run malicious code to extract the personal data of over 7 million users of Kaikatsu Club, Japan's largest internet cafe chain. When asked about his motivation, he said he wanted to buy Pokemon cards. The detail that makes this story significant is not the motive but the method. The teenager was not technical. He had no coding background. He used AI.
This incident is not an outlier. It is a data point in a pattern that has become unmistakable in 2025 and 2026: AI has fundamentally changed who can conduct sophisticated cyberattacks, how quickly they can act, and at what scale. The barrier to entry for technically sophisticated attacks has collapsed, and the cybersecurity industry is only beginning to reckon with the implications.
The New Profile of the AI-Assisted Attacker
The pre-AI era had a relatively predictable attacker profile: sophisticated attacks required sophisticated attackers. Nation-state actors, organized criminal groups, and highly skilled individual hackers dominated the threat landscape. Teenagers and non-technical individuals could cause nuisance-level damage, but not the kind of large-scale data breaches or infrastructure attacks that make headlines. That distinction no longer holds.
In February 2025, three teenagers (ages 14, 15, and 16) with no coding background used ChatGPT to build a tool that hit Rakuten Mobile's systems approximately 220,000 times, spending their proceeds on gaming consoles and online gambling. In July 2025, a single actor using Claude Code, Anthropic's agentic coding platform, ran a month-long extortion campaign against 17 organizations, using agentic AI to develop malicious code, organize stolen files, analyze financial records to calibrate ransom demands, and draft extortion emails. In December 2025, another individual used Claude Code and ChatGPT to breach Mexican government systems, targeting more than 10 agencies and stealing over 195 million taxpayer records.
"We are now seeing single-actor attacks that would have been characteristic of organized teams, and smaller-scale attacks by nontechnical individuals that would have been more characteristic of a talented hacker in the pre-AI era."
— The Hacker News, May 2026
The Numbers Behind the Trend
The qualitative shift in attacker profiles is backed by quantitative data that tells a consistent story. Malicious packages discovered in public repositories grew from 55,000 in 2022 to 454,600 in 2025, a more than eightfold increase. The growth accelerated notably in 2023, the year GPT-4 was released, and again in 2025, the marquee year for agentic coding. Cloud intrusions rose 35% in 2025. And AI-generated phishing campaigns now achieve higher click-through rates than human red teams.
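As a quick sanity check, the fold increase and the implied compound annual growth rate can be derived from the two endpoint counts alone. A minimal sketch; the counts come from the text above, and the derived rates are illustrative:

```python
# Sanity check on the malicious-package growth figures cited above. The two
# endpoint counts come from the text; the fold increase and compound annual
# growth rate (CAGR) are derived here for illustration.

PACKAGES_2022 = 55_000
PACKAGES_2025 = 454_600
YEARS = 2025 - 2022

fold_increase = PACKAGES_2025 / PACKAGES_2022
cagr = fold_increase ** (1 / YEARS) - 1

print(f"Fold increase: {fold_increase:.1f}x")  # ~8.3x over three years
print(f"Implied CAGR:  {cagr:.0%}")            # ~102% per year
```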
Perhaps the most alarming metric is the collapse of time-to-exploit: the time from when a vulnerability is publicly disclosed until an exploit for that vulnerability appears in the wild. This figure fell from over 700 days in 2020 to just 44 days in 2025. Mandiant's M-Trends 2026 report found that the trend has effectively gone negative — exploits are now routinely arriving before patches, with 28.3% of CVEs exploited within 24 hours of disclosure. The average time to remediate a known high-severity CVE, meanwhile, remains 74 days.
[Figure: Time-to-Exploit, in days from vulnerability disclosure to active exploitation]
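The arithmetic behind those numbers is worth spelling out. If the average high-severity CVE takes 74 days to remediate but a working exploit appears within 44 days, and within 24 hours for more than a quarter of CVEs, then a typical organization spends weeks running exploitable code after attackers already have a weapon for it. A minimal sketch using only the figures above; the function name and the simplifying assumption that the two averages compose directly are mine, not claims from the Mandiant report:

```python
# Back-of-the-envelope model of the patch-gap exposure window, using only
# the figures cited above. The direct subtraction of the two averages is
# an illustrative simplification, not the report's methodology.

TIME_TO_EXPLOIT_DAYS = 44       # 2025 average: disclosure to in-the-wild exploit
TIME_TO_REMEDIATE_DAYS = 74     # average: disclosure to patch, high-severity CVEs
SHARE_EXPLOITED_IN_24H = 0.283  # share of CVEs exploited within a day of disclosure

def exposure_window(time_to_exploit: float, time_to_remediate: float) -> float:
    """Days an average organization runs exploitable code after an exploit exists."""
    return max(0.0, time_to_remediate - time_to_exploit)

typical = exposure_window(TIME_TO_EXPLOIT_DAYS, TIME_TO_REMEDIATE_DAYS)
fast = exposure_window(1.0, TIME_TO_REMEDIATE_DAYS)  # exploited within 24 hours

print(f"Typical exposure window: {typical:.0f} days")                # 30 days
print(f"When exploited in <24h:  {fast:.0f} days")                   # 73 days
print(f"Share of CVEs hit that fast: {SHARE_EXPLOITED_IN_24H:.0%}")  # 28%
```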
The Capability Inflection Point
The underlying driver is the rapid improvement of frontier AI models on software engineering benchmarks. On SWE-bench, a test of the ability to resolve real GitHub issues, top models went from 33% success in August 2024 to just under 81% by December 2025. This improvement in legitimate coding capability translates directly into offensive capability: the same models that can write production-quality software can write production-quality malware, identify vulnerabilities in existing code, and develop exploits for those vulnerabilities.
The concern about Anthropic's unreleased Mythos model — which reportedly prompted the White House to consider pre-release AI vetting — reflects this dynamic. Each successive generation of frontier models does not merely improve on the previous generation; it unlocks qualitatively new categories of attack that were previously beyond the reach of non-expert actors. The question facing policymakers, AI developers, and the cybersecurity industry is not whether AI-assisted attacks will continue to grow in sophistication and scale — they will — but whether defensive AI can keep pace, and what governance frameworks can slow the acceleration of offensive capabilities without stifling legitimate AI development.
What Defenders Can Do
The WEF's concurrent report on AI and cybersecurity offers some grounds for measured optimism. Organizations that have deployed AI strategically in their security operations are achieving meaningful results: reducing breach costs by up to $1.9 million, shortening breach lifecycles by 80 days, and multiplying analyst capacity without proportional headcount increases. The challenge is that these gains require sustained investment in both AI capabilities and the human expertise to direct them — a combination that many organizations, particularly smaller enterprises and government agencies, are not yet positioned to make.
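Whether that investment pencils out is ultimately an expected-value question. The sketch below is illustrative only: the per-breach saving comes from the WEF figures above, while the breach probability and program cost are hypothetical placeholders an organization would replace with its own estimates.

```python
# Hypothetical expected-value sketch of the WEF savings figures. Only the
# per-breach cost reduction ($1.9M) comes from the report cited above; the
# annual breach probability and program cost are placeholder inputs.

AI_COST_REDUCTION_USD = 1_900_000   # per-breach saving cited above
ANNUAL_BREACH_PROBABILITY = 0.25    # hypothetical placeholder
ANNUAL_PROGRAM_COST_USD = 400_000   # hypothetical AI tooling + staffing cost

expected_saving = ANNUAL_BREACH_PROBABILITY * AI_COST_REDUCTION_USD
net_benefit = expected_saving - ANNUAL_PROGRAM_COST_USD

print(f"Expected annual saving:      ${expected_saving:,.0f}")  # $475,000
print(f"Net benefit vs program cost: ${net_benefit:,.0f}")      # $75,000
```

Under these placeholder inputs the program clears its cost, but only barely; organizations with lower breach risk or higher tooling costs may not see positive returns, which is the gap the WEF report flags for smaller enterprises.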
The Osaka teenager who wanted Pokemon cards is a useful frame for understanding the scale of the challenge. The democratization of AI-assisted attack capability is not primarily a story about sophisticated nation-state actors or organized criminal enterprises — those threats existed before AI and will continue to evolve regardless. It is a story about the expansion of the attacker population to include individuals who, in any previous era, would have lacked the technical capability to cause significant harm. Addressing that expansion requires solutions that operate at a comparable scale of democratization on the defensive side.