This is a really interesting look inside AI-generated obfuscation techniques in a malicious SVG file sent in a phishing email. The file (a vector image format by design, but increasingly used as a malware payload container) opens with all kinds of hidden business data, presumably so that anyone casually inspecting the file details sees something plausibly legitimate. But if it were legit, why is it hidden? Further down, there's a large block of delimited business analysis terms in another invisible text block, which turn out to be an encoding for malicious strings that would otherwise be detected. A JavaScript function then parses those terms, converts each word back into the strings that scanners would have flagged, and uses the results in the instructions that carry out the phishing attack. 🤯 On the flip side of all this, these AI-generated techniques are distinctly... inhuman, which presents new detection opportunities. Earlier this year, I forecast that this would be the year an offensive/defensive AI arms race launched in anger. Here we are.
AI vs. AI: Detecting an AI-obfuscated phishing campaign | Microsoft Security Blog
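The dictionary-style decoding described in the post could look something like the minimal sketch below. The term list and lookup table here are hypothetical illustrations of the general technique, not the actual payload or mapping from the analyzed SVG.

```javascript
// Minimal sketch of dictionary-based string obfuscation, assuming a
// hypothetical term list and mapping (not the real campaign's data).

// Benign-looking "business analysis" terms hidden in an invisible
// text element, acting as an encoded alphabet.
const hiddenTerms = "revenue-synergy-forecast-dashboard-growth";

// Hypothetical lookup table: each innocuous word stands in for one
// character of the string the attacker actually wants to build.
const dictionary = {
  revenue: "e",
  synergy: "v",
  forecast: "a",
  dashboard: "l",
  growth: "(",
};

// Decode: split on the delimiter and map each term back to its
// character, reassembling a string ("eval(" in this toy example)
// that a static scanner would flag if it appeared literally.
const decoded = hiddenTerms
  .split("-")
  .map((term) => dictionary[term])
  .join("");

console.log(decoded); // "eval("
```

Because the raw file contains only innocuous vocabulary and the suspicious strings exist only at runtime, static signature matching never sees them, which is exactly the detection gap the blog post describes.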