Up until now, AI has had some impact on offensive and defensive cybersecurity capabilities - notably in improving the speed and efficiency of Initial Access Brokers - but it hasn't been truly transformative yet. I have been hesitant to make much noise about where it has had an impact, because there is already far too much bad noise on the subject. But I do think this will change this year, and we are already seeing how. Two examples:
1. An autonomous AI researcher's discovery has been accepted at a leading scientific conference. This news was shared last week by Mark Russinovich (Microsoft CTO, Deputy CISO and Technical Fellow). Here's the blog about the conference acceptance, Zochi Publishes A* Paper, and here's the paper itself, 2503.10619. So what has that got to do with AI changing cybersecurity capabilities, I hear you ask? In short, this autonomous AI researcher analysed a large body of existing research to identify hidden connections within it, then proposed and evaluated possible solutions without human intervention. It then wrote the conference submission, which passed peer review and was accepted. In the submitted example, it found a novel approach to AI jailbreaks. As it happens, this builds on (and improves) earlier research released last year by Russinovich and others. You need to be exceptionally clever to do that. This is an extremely active field of human research, and for an autonomous AI agent to unearth an improvement on existing techniques when an active brain trust is already working on the same problem is a signal of something new. We will start to see this technique used, and defensive capabilities, some of them AI-driven, will have to adapt for it. I feel that reasoning models and researcher capabilities are the tipping point at which generative AI starts transforming the threat landscape, and we're seeing the first signs of that now.
2. Microsoft have started to use some of their own models to enhance phishing detection, and the accuracy statistics for the new capability are very impressive. So much of security and compliance relies on matching approaches that haven't always been imbued with vector search capabilities. Given how much effort attackers put into constantly adapting to evade those traditional detection methods, this more holistic way of matching is a very positive development. Wherever natural language, rather than other signals, is part of the detective work, I think we'll see many more capabilities like this. I am advocating with these teams to avoid shunting these advanced features into new SKUs, as I feel this is simply going to be the way of this marketplace going forward. All I can do is give my view though. :) Read more on the new capability here: Microsoft Defender for Office 365's Language AI for Phish: Enhancing Email Security | Microsoft Community Hub
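To make the contrast concrete, here is a toy sketch of why vector-style similarity matching is harder to evade than exact keyword rules. This is purely illustrative and not Microsoft's implementation: the function names, the character-trigram representation, and the 0.5 threshold are all made up for the example, and a real system would use learned language-model embeddings rather than trigram counts.

```python
import math
from collections import Counter

def char_trigrams(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A known phishing lure (invented for this example).
KNOWN_LURE = "Your account has been suspended. Verify your password immediately."

def keyword_match(text: str) -> bool:
    # Brittle: an exact-substring rule misses trivial obfuscation.
    return "verify your password" in text.lower()

def vector_match(text: str, threshold: float = 0.5) -> bool:
    # More robust: the overall character-level shape of the message
    # still resembles the known lure despite character substitutions.
    sim = cosine_similarity(char_trigrams(text), char_trigrams(KNOWN_LURE))
    return sim >= threshold

# Classic evasion: swap letters for look-alike digits.
obfuscated = "Y0ur account has been suspended. Ver1fy your passw0rd immediately."
```

Here `keyword_match(obfuscated)` returns False while `vector_match(obfuscated)` returns True: only a handful of the message's trigrams are disturbed by the substitutions, so the similarity stays high. The same intuition carries over to semantic embeddings, where even a full rephrasing of the lure can still land close to known phishing text in vector space.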
---
Update 4 June 2025: less than 24 hours after writing this, OpenAI have announced their new Outbound Coordinated Disclosure Policy. They are already finding vulnerabilities, need to formalise and scale that function, and only see these needs growing. Scaling security with responsible disclosure | OpenAI