Our updated Cyber Handbook – Claude Mythos and other AI threats
14 April 2026
Author: Georgina Kon and Greg Palmer
We have updated our Cyber Security Handbook to address the changing threat landscape, including the rise of AI-driven attacks. One example of these changes is Claude Mythos Preview, which Anthropic describes as having “reached a level of coding capability where [it] can surpass all but the most skilled humans at finding and exploiting software vulnerabilities”. AI is clearly an important new technology that will have significant and far-reaching effects for most businesses, and cyber security is no exception. We consider its use to enhance existing cyberattacks and defences, and the threats to AI systems themselves.
One important development is threat actors’ increasing use of AI tools to suggest potential attacks, automatically generate code and even execute those attacks. The net effect is to lower the ‘barriers to entry’ for less sophisticated attackers, while also improving the speed and efficiency of more capable ones.
Claude Mythos is reported to have uncovered thousands of high-severity zero-day vulnerabilities, including some in every major operating system and web browser. Anthropic’s examples include a 27-year-old OpenBSD bug involving TCP selective acknowledgements (SACK), which it says could allow a remote attacker to crash an OpenBSD host that responds over TCP. This is notable as OpenBSD is a mature and security-hardened operating system used to run firewalls and other critical infrastructure.
These AI tools might not always match the capabilities of an expert human threat actor, but they are closing the gap quickly and, in any event, the gap may not matter much in practice. If a broader spread of less sophisticated attacks allows a threat actor to penetrate your systems, that is still a problem.
AI is also increasingly used for low-tech social engineering and AI-enhanced phishing. These tools automate the work needed to gather information on persons of interest and to craft more effective, personalised phishing emails. All this means the volume of attacks is likely to rise and a broader range of targets will be attacked.
While most AI tools are configured to prevent their use for cyberattacks, there are many tools available and some can be jailbroken. Interestingly, Anthropic has decided not to make Claude Mythos generally available because of the risk of misuse.
This is not a one-sided affair. AI tools also have an important role in defending businesses against cyberattacks, e.g. detecting and helping patch vulnerabilities, and enabling more sophisticated SIEM (security information and event management) systems to detect intruders.
While Claude Mythos has identified many serious vulnerabilities, Anthropic also has a strategy to address them. It follows a coordinated vulnerability disclosure process to give affected organisations time to review and patch each vulnerability. Anthropic has also announced Project Glasswing, which brings together key industry players to address the economic, public safety and national security issues raised by this new glut of AI-identified vulnerabilities and to secure the world’s most critical software.
The end product of this “AI-hardening” process may well be software that is significantly more secure. As Anthropic puts it: “we believe that powerful language models will benefit defenders more than attackers, increasing the overall security of the software ecosystem. The advantage will belong to the side that can get the most out of these tools”.
Separate from AI-enabled attacks, businesses also face more familiar security risks as they deploy AI assistants, chatbots and agentic workflows into their operations.
In a recent case, a consultancy’s in-house AI system was compromised using a SQL injection attack. The attack revealed that the system held over 40 million prompt messages that, one assumes, contain highly sensitive information about the consultancy and its clients. Added to that were around 200,000 documents uploaded to the system, along with the system prompts and AI model configurations. There are a number of reasons why businesses need to focus on this risk.
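The attack vector in that incident is a long-understood one, and the standard mitigation is to use parameterised queries rather than building SQL strings from user input. The sketch below (using Python’s built-in sqlite3 module, with a hypothetical prompts table standing in for the compromised system’s data store) shows how a classic injection payload dumps every row from a string-concatenated query but matches nothing when the same input is passed as a bound parameter:

```python
import sqlite3

# Hypothetical store of AI prompt messages, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompts (user_id TEXT, message TEXT)")
conn.execute("INSERT INTO prompts VALUES ('alice', 'confidential client plan')")

# Attacker-controlled input carrying a classic injection payload.
user_id = "nobody' OR '1'='1"

# VULNERABLE: interpolating the input into the SQL string lets the
# payload rewrite the WHERE clause and return every row.
vulnerable = conn.execute(
    "SELECT message FROM prompts WHERE user_id = '" + user_id + "'"
).fetchall()

# SAFE: a parameterised query treats the input purely as data, so the
# payload is matched literally against user_id and finds nothing.
safe = conn.execute(
    "SELECT message FROM prompts WHERE user_id = ?", (user_id,)
).fetchall()

print(len(vulnerable))  # the injected OR clause returns all rows
print(len(safe))        # no user has that literal id, so zero rows
```

The same principle applies to any query an AI assistant or chatbot backend issues on a user’s behalf: untrusted input should only ever reach the database as a bound parameter.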
Finally, it is worth considering how these new risks are addressed by your Information Security Programme. These risks are new, but not that new. A properly updated policy will help avoid or mitigate many of these risks and regulators will now expect AI threats to be explicitly dealt with in your policy framework.
Our Cyber Security Handbook: The Essential Handbook for In-house Counsel is available here.
Further details about Claude Mythos Preview are here and in relation to Glasswing here.