CERT-UA (Ukraine's Computer Emergency Response Team) has identified LLM-generated malware: an email-delivered Python script that calls a Hugging Face API, using Qwen 2.5-Coder-32B-Instruct to craft commands that gather machine information, collect text and Office files, stage them in ProgramData, and exfiltrate the data. What's interesting is that the LLM-generated commands are likely to differ slightly on every run, which complicates signature-based detection. More than anything, it underscores that Shadow AI isn't just an unintentional data loss problem; it's also a malicious data theft problem. Thomas Roccia's NOVA rules project does offer detections for this type of attack, but that project is still in beta and work remains to define how it can be fully operationalised.
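To illustrate why per-run variation defeats naive signatures, here is a minimal static-indicator scan a defender might sketch. This is a hypothetical example, not the NOVA approach (NOVA matches on prompt content rather than code); the regexes and function names are assumptions for illustration only.

```python
import re

# Hypothetical static indicators for an LLM-driven loader of this kind:
# a hosted-inference endpoint, the model name, and execution of generated output.
INDICATORS = {
    "inference_api": re.compile(r"api-inference\.huggingface\.co|router\.huggingface\.co"),
    "model_name": re.compile(r"Qwen2\.5-Coder-32B-Instruct", re.IGNORECASE),
    "exec_of_output": re.compile(r"subprocess\.(run|Popen|check_output)|os\.system"),
}

def scan_script(source: str) -> list[str]:
    """Return the names of indicators found in a script's source text."""
    return [name for name, pattern in INDICATORS.items() if pattern.search(source)]

def looks_like_llm_loader(source: str) -> bool:
    # Weak heuristic: an inference-API call alone is benign, so flag only the
    # combination of a hosted-LLM endpoint and execution of external output.
    hits = scan_script(source)
    return "inference_api" in hits and "exec_of_output" in hits
```

Note the limitation this example exposes: these indicators live in the delivery script, not in the commands the model emits, and the generated commands are the part that changes on every run. That is precisely the gap prompt-level detection projects like NOVA aim to close.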
CERT-UA
https://www.linkedin.com/posts/thomas-roccia_cert-ua-published-a-report-on-a-malware-activity-7353408069898252288-kI7T?utm_source=share&utm_medium=member_desktop&rcm=ACoAAC38JMUBzW9m1vYbQFjaUjgd0_ZLI7I_VwU
nova-framework/nova_rules/lamehug_apt_28.nov at main · fr0gger/nova-framework · GitHub