Ukraine warns of AI-powered malware targeting the defence sector
CERT-UA says the LAMEHUG malware uses a large language model to craft commands in real time and evade detection.

Ukraine’s national cyber authority has issued a warning about what it says is the first known use of large language model-powered malware in active attacks targeting the country’s defence and security sector.
According to CERT-UA, Ukraine’s Computer Emergency Response Team, the so-called “LAMEHUG” malware was deployed in a recent campaign that it first uncovered on 10 July. The agency says it assesses, with moderate confidence, that the attacks are the work of APT28, a state-sponsored hacking group aligned with Russia’s military intelligence agency, the GRU.
APT28, also known as UAC-0001 and Fancy Bear, has been linked to several high-profile espionage and sabotage operations around the world, including a string of attacks on UK defence organisations involved in delivering foreign assistance to Ukraine.
Earlier this month, the UK’s National Cyber Security Centre also formally linked Fancy Bear to a cyber campaign targeting Western logistics and technology sectors using the ‘Authentic Antics’ malware, and sanctioned 18 Russian individuals connected to the attacks.
In the Russia-backed group’s latest campaign observed by CERT-UA, the LAMEHUG malware is delivered through phishing emails disguised as communications from Ukrainian ministries. Once opened, a malicious .pif file triggers the LAMEHUG loader, which then connects to an open-source LLM hosted on Hugging Face’s cloud platform. There it queries Qwen 2.5-Coder-32B-Instruct, a model capable of generating code and system commands, to produce its instructions dynamically.
Unlike traditional malware that relies on pre-programmed instructions, LAMEHUG uses the LLM to gather detailed information about the victim’s computer, including hardware specifications, running processes, and network configurations. It then scans for sensitive documents, such as PDFs, Word files, and spreadsheets, before exfiltrating data via encrypted channels.
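In outline, the mechanism CERT-UA describes is a loop of "ask a hosted LLM for a command, then run what comes back". The following Python sketch illustrates that pattern from a defender's perspective; it is not LAMEHUG's actual code, and the endpoint URL, prompt wording, and response format are assumptions for illustration only. Nothing here executes the returned text.

```python
# Illustrative sketch of the "LLM-generated command" pattern described in
# public reporting, for defensive analysis. Endpoint, prompt, and response
# shape are assumptions, not LAMEHUG internals.
import json
import urllib.request

# Hypothetical Hugging Face Inference API endpoint for the reported model.
API_URL = ("https://api-inference.huggingface.co/models/"
           "Qwen/Qwen2.5-Coder-32B-Instruct")

# A hard-coded reconnaissance prompt of the kind the reporting describes.
RECON_PROMPT = ("Produce a single Windows command line that collects "
                "hardware, process, and network information into info.txt.")

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Package the prompt as an HTTPS POST to the inference API."""
    body = json.dumps({"inputs": prompt}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

def extract_command(response_body: str) -> str:
    """Pull the generated command text out of a (hypothetical) response."""
    payload = json.loads(response_body)
    return payload[0]["generated_text"].strip()

# The malware would hand the extracted string to a shell. The defender's
# takeaway: on the wire this is just an ordinary HTTPS POST to a public
# AI service, not a connection to a bespoke command-and-control server.
```

The design point for detection teams is in the last comment: because the "C2 channel" is a legitimate public API, blocking it means making policy decisions about AI service traffic rather than blocklisting attacker infrastructure.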
What makes LAMEHUG particularly dangerous is its stealth and flexibility. Because the malware generates commands via a public API, its traffic can be hard to distinguish from legitimate AI use within an organisation, meaning traditional antivirus tools and endpoint detection platforms may miss it entirely.
Vitaly Simonovich, a threat intelligence researcher at Cato Networks, warns that this marks a turning point in the evolution of cyber threats, where attackers use off-the-shelf generative AI tools to automate reconnaissance, tailor commands, and potentially adapt in real time without further human intervention.
“The discovery of LAMEHUG by CERT-UA marks a significant milestone in the threat landscape,” he told Resilience Media. “The campaign highlights state-sponsored investment in emerging AI technologies for cyber activities, with Ukraine serving as the testing ground for these new capabilities. The relatively simple implementation suggests this is APT28’s attempt at learning how to weaponise LLMs, likely opening the door for more sophisticated AI-driven campaigns in the future.”
This incident also highlights growing concerns about how open-source AI models, often released with minimal restrictions, could be weaponised. While the AI community continues to debate safety and governance, LAMEHUG may prove to be the first real-world case of an LLM being actively used in a hostile cyber campaign, generating its instructions without direct human input.
CERT-UA did not specify whether LAMEHUG’s execution of the LLM-generated commands was successful, which agencies were targeted, or whether any sensitive data was accessed.