The Chinese GTG-1002 espionage campaign is an AI security wake-up call
Cybersecurity teams are up against an AI onslaught, and the only way to defend will be to fight fire with fire
In September, a Chinese state-sponsored group ran a cyber-espionage campaign in which off-the-shelf artificial intelligence tooling is believed to have performed nearly all of the tactical work.
According to a report from Anthropic, malicious hackers used Anthropic’s Claude Code AI and other commodity tech to conduct reconnaissance, discover vulnerabilities, write exploits, harvest credentials, move laterally through networks, and exfiltrate data. Human operators intervened at perhaps five or six decision points across the entire operation, it noted. What would have taken elite hackers months was compressed into hours.
This should be a five-alarm fire for every security leader in government and enterprise.
Anthropic described the campaign – dubbed GTG-1002 when it first wrote about it publicly in November – as the first documented case of a cyberattack largely executed without human intervention at scale. The targets included major technology corporations, financial institutions, chemical manufacturers, and government agencies, and some intrusions succeeded.
As destructive as GTG-1002 may have been, just as significant is what it reveals about the future.
The attackers issued thousands of requests, often several per second. Anthropic described this as a tempo physically impossible for human operators. The AI worked autonomously for hours while its human handlers spent perhaps twenty minutes reviewing progress and authorizing escalation. One operator with AI orchestration achieved the output of an entire advanced persistent threat team.
We have spent decades thinking about cyberattacks as campaigns. They have beginnings and ends. Attackers probe, infiltrate, exfiltrate, and disappear. Defenders investigate, patch, and recover. The cadence is episodic, like warfare has always been.
AI agents change the grammar of conflict. They compress time. They scale without adding personnel. They operate at speeds that render human cognition a bottleneck.
Consider what the GTG-1002 attackers actually used.
According to Anthropic, the campaign relied on commodity penetration tools: network scanners, password crackers, exploitation frameworks – all widely available. The sophistication came from orchestration. The AI coordinated these ordinary instruments into an extraordinary capability. As Anthropic noted, cyber capability now derives from orchestrating cheap, available resources rather than from technical innovation.
This accessibility is the alarming part. Traditional nation-state operations require teams of skilled operators, custom malware development, zero-day research, and patient attention over months or years. GTG-1002 demonstrated a different cost structure. A small, AI-literate team with basic hacking knowledge could have executed something similar. The barrier to conducting elite-level operations has collapsed.
The implications extend beyond cybersecurity. The GTG-1002 campaign offers a preview of how AI agents will reshape any domain where speed and scale matter. In areas like intelligence gathering, logistics, and research, the pattern is the same: tasks that once required large teams working over long periods can now be compressed, automated, and run continuously.
The cybersecurity industry has recognized this shift. Companies like Darktrace, CrowdStrike, and SentinelOne already deploy AI-driven threat detection, automated incident response, and continuous monitoring systems.
These defensive capabilities have matured significantly over recent years. Yet GTG-1002 demonstrates that offensive AI has evolved just as rapidly. The concerning development is not that AI entered cybersecurity—that happened years ago—but that state actors now orchestrate AI agents at a scale and sophistication that even advanced defensive systems struggle to match.
Anthropic’s analysis highlighted a critical asymmetry: while defensive AI must protect everywhere, offensive AI needs to succeed only once. The attackers in GTG-1002 exploited this fundamental disadvantage, using AI to probe thousands of potential entry points simultaneously until finding a weakness.
This reshapes how we should think about the role of humans in contested systems. In the tactical execution phase, humans are simply too slow. Their reaction times, their need for sleep, their cognitive limits all become liabilities when adversaries operate at processor speed. The successful posture pushes humans up the stack, to the strategic layer where they set policy and define boundaries. The tactical layer belongs to the machines.
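The humans-up-the-stack posture described above can be sketched in code. The example below is a minimal, hypothetical control layer, not anything from the Anthropic report: agent-proposed actions below a risk threshold execute autonomously at machine speed, while anything above it is queued for human authorization. All names and the threshold value are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A hypothetical action proposed by an autonomous agent."""
    name: str
    risk: float  # 0.0 (routine tactical step) .. 1.0 (strategic decision)

# Illustrative policy boundary set by humans at the strategic layer.
APPROVAL_THRESHOLD = 0.7

@dataclass
class ControlLayer:
    approval_queue: list = field(default_factory=list)

    def dispatch(self, action: Action) -> str:
        """Route a proposed action: autonomous below the threshold,
        human-gated at or above it."""
        if action.risk < APPROVAL_THRESHOLD:
            # Tactical layer: the machine acts without waiting on a person.
            return f"executed:{action.name}"
        # Strategic layer: held until a human reviews and authorizes.
        self.approval_queue.append(action)
        return f"pending-approval:{action.name}"

layer = ControlLayer()
print(layer.dispatch(Action("scan-subnet", 0.2)))        # executed:scan-subnet
print(layer.dispatch(Action("escalate-access", 0.9)))    # pending-approval:escalate-access
```

The design choice mirrors the article's point: human cognition is removed from the tactical loop entirely and spent only at the handful of decision points that cross a policy boundary.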
The Pentagon and defense contractors have talked for years about integrating AI into military systems. Most of that conversation has centered on discrete applications: better targeting, faster analysis, improved logistics.
GTG-1002 suggests a more fundamental shift. The question facing defense planners is how to build organizations where human judgment governs strategic decisions while AI systems handle the overwhelming volume and velocity of tactical operations—matching the speed and scale demonstrated by our adversaries.
We tend to imagine AI as a tool we pick up and put down, a capability we invoke for specific tasks. Write this email. Summarize this document. Analyze this data. The September attack reveals something different about where AI is heading. The tool is becoming the worker. And unlike human workers, it can run around the clock.
The attackers who executed GTG-1002 understood this. They built a system where human judgment was required only at a handful of strategic moments. Everything else ran on its own.
The age of the AI agent has arrived. The organizations that recognize this, that architect their defenses and their operations around continuous AI processes rather than episodic human efforts, will maintain an advantage. The rest will find themselves outpaced by adversaries who grasped the shift before they did.
Ben Van Roo is the co-founder and CEO of Legion Intelligence, the agentic AI platform solving the DOD’s AI infrastructure problem. Ben has spent his career building tech companies serving the public and private sectors. He spent time as the VP of Supply Chain and Data Science at Chegg, helping grow the company and take it public, and as a researcher at RAND, where he worked with the Department of Defense on supply chain inefficiencies and infrastructure challenges. Ben has a PhD in Operations Research from the University of Wisconsin-Madison.


