The rise of AI in cyber threats
Over the past few years, there have undoubtedly been significant developments in the AI landscape. These include, more recently, the emergence of agentic AI, where AI agents can perform autonomous, multi-step tasks, and multimodal AI, where models can process multiple types of data (text, image, video, audio) at once. While these developments have been significant for productivity, the misuse of such tools by malicious actors is reported to be rampant. However, the extent to which attackers are leveraging AI, as suggested by some of these reports, is questionable. On review, it appears that threat actors are leveraging AI to lower the barrier to entry and speed up common workflows, rather than deploying fully autonomous and novel attack campaigns.
What the evidence actually shows
One such report, from MIT Sloan and Safe Security in September 2025, suggested that 80% of ransomware attacks are powered by generative AI. This was criticised by cybersecurity researchers as egregiously incorrect and laughably disingenuous. The report did not state its definition of “powered by AI”, nor did it explain how it concluded that a threat actor was “using AI”. Additionally, the report stated that attacks from the 2023–2024 period were analysed, yet several of the ransomware groups it identified as “definitely using AI” ceased operations prior to 2023, in one instance before the first GPT model had even been released. Similarly, other groups were identified as “using AI” despite researchers who track those groups suggesting the opposite. As a result of the backlash, MIT eventually took down the report, but not before mainstream media outlets picked it up and spread the misinformation to a wider audience. Researcher Kevin Beaumont produced an in-depth review of the report, discussing the inaccuracies as well as its likely motivations: “MIT Sloan are paid by Safe Security, and the principal researcher … sits on Safe Security’s board. Safe Security provide Agentic AI solutions. MIT’s report pitches needing a new approach … linking to Safe Security webpages”. Beaumont even coined the term “cyberslop” for instances like this, where trusted institutions make baseless claims about cyber threats from generative AI for profit.
There have also been reports of “dark LLMs” being sold on cybercriminal forums, such as WormGPT and KawaiiGPT. These are marketed as custom GPT models lacking the guardrails that come with mainstream LLM tools, priced at tens to hundreds of dollars per month. On inspection, however, many of these dark LLMs appear to be underground actors simply selling access to jailbroken instances of mainstream LLMs. Those that are genuinely custom have been observed to be underwhelming and lacking in technical depth: they do not bring a “new technological gap or advantage to the fundamental mechanics of the cyberattack process”, but rather generate outputs from information publicly available on the web, while remaining vulnerable to hallucinations. As a result, researchers assess that LLM-generated malware remains immature and is still in the experimentation stage.
Similarly, Recorded Future assessed that most so‑called “AI malware” today is significantly less advanced than headlines suggest, positioning AI primarily as a force multiplier for existing attacker techniques rather than a source of fully autonomous, novel threats. They suggested that nearly all publicly reported cases fall within early to mid-maturity levels, such as AI‑assisted phishing, code generation, or limited orchestration via cloud‑hosted LLMs. Several claims of “first‑ever AI malware” (including MalTerminal, Lamehug, and PromptLock) emerged between mid‑2025 and late 2025, but most were later shown to be research proof‑of‑concepts, experimental tools, or narrowly scoped implementations rather than true breakthroughs, with no verified examples yet of fully autonomous, embedded “bring‑your‑own‑AI” malware operating independently in the wild.
In another report, released in November, Claude AI chatbot maker Anthropic claimed to have uncovered and disrupted “the first reported AI-orchestrated cyber espionage campaign”, in which a Chinese state-sponsored group conducted an attack that was “largely executed without human intervention at scale”. While suggesting a high degree of automation, Anthropic’s threat intelligence team noted that “Claude frequently overstated findings and occasionally fabricated data during autonomous operations”, such as fabricating valid credentials. The report itself also provides no evidence of the claimed threat actor activity, which lowers its credence. This, alongside the fact that Anthropic develops Claude and is therefore likely to be biased, means the report should not be accepted without scrutiny.
Where AI is making an impact
Conversely, we have seen a significant reduction in time-to-exploit over the past few years, with threat actors ranging from opportunistic criminals to state‑aligned groups increasingly exploiting vulnerabilities within hours of disclosure, as opposed to the previous days or weeks. This has been partially attributed to the increasing capabilities of AI tools and their high success rates at accelerating exploitation, crucially, when technical information about a vulnerability is available; without that information, AI tools would not be able to develop such exploits.
Overall, threat actor AI usage is not a source of novel tactics, techniques, and procedures (TTPs) or a central threat in itself, but rather a force multiplier that enhances efficiency and lowers the barrier to entry, most commonly in social engineering, where it enables more sophisticated phishing lures and deepfake opportunities. This means that while attackers can rapidly create new toolsets, detection and mitigation of such threats remain firmly in the realm of existing TTPs and common behaviours. Meanwhile, we are seeing a growing volume of vendor‑driven “AI attack campaign” reporting, where speculative or weakly evidenced claims are used to market AI security solutions, contributing more noise than signal to the threat landscape.
Further reading: