Today, Anthropic published a blog post describing a powerful new-generation AI model that has been honed to excel at cybersecurity red-team tasks:

“Assessing Claude Mythos Preview’s cybersecurity capabilities”
https://red.anthropic.com/2026/mythos-preview/

2026-04-07 – Anthropic Red Team

This blog post presents a new (preview) agentic AI model named “Mythos Preview” for cybersecurity vulnerability discovery that goes beyond what standard Opus 4.6 can do: not only identifying vulnerabilities at the source level, but actually building and executing exploits to attack software (and digital infrastructure in general). This is part of a larger collaborative project called “Glasswing” (https://www.anthropic.com/glasswing), whose objective is “to secure the world’s most critical software.”

The blog post showcases the extremely powerful capabilities of Claude Mythos Preview to discover vulnerabilities through agent-mediated cyberattacks (within a controlled environment). Impressive, but dangerous in the wrong hands. Thankfully, Anthropic is not releasing this tool into the wild, but it shows the future possibilities of AI-assisted cyberattacks.

The blog post is extremely long, with details on some of the software vulnerabilities identified using this tool. Many more vulnerabilities were not disclosed, out of ethical responsibility. Even a brief read will leave you amazed at what capabilities can be developed with AI. Further down, in the “Suggestions for defenders today” section, there are a number of valuable suggestions that software developers and system defenders alike need to take to heart today:

“Suggestions for defenders today — As we wrote in the Project Glasswing announcement, we do not plan to make Mythos Preview generally available. But there is still a lot that defenders without access to this model can do today.

Use generally-available frontier models to strengthen defenses now. Current frontier models, like Claude Opus 4.6 (and those of other companies), remain extremely competent at finding vulnerabilities, even if they are much less effective at creating exploits.”

Also:

“Think beyond vulnerability finding. Frontier models can also accelerate defensive work in many other ways. For example, they can:

  • Provide a first-round triage to evaluate the correctness and severity of bug reports;
  • De-duplicate bug reports and otherwise help with the triage processes;
  • Assist in writing reproduction steps for vulnerability reports;
  • Write initial patch proposals for bug reports;
  • Analyze cloud environments for misconfigurations;
  • Aid engineers in reviewing pull requests for security bugs;
  • Accelerate migrations from legacy systems to more secure ones;

These approaches, along with many others, are all important steps to help defenders keep pace. To summarize: it is worth experimenting with language models for all security tasks you are doing manually today.”
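As a hedged illustration of the first bullet (first-round triage of bug reports), here is a minimal sketch of how a defender might wrap a generally available model in a triage helper. The function names, prompt wording, and severity labels are my own assumptions, not taken from the blog post; the model call itself is stubbed with a hard-coded reply so the prompt-building and parsing logic can be shown without an API key.

```python
from dataclasses import dataclass

# Hypothetical severity scale for illustration; adapt to your own triage policy.
SEVERITIES = ("critical", "high", "medium", "low", "informational")

@dataclass
class TriageResult:
    severity: str
    rationale: str

def build_triage_prompt(report: str) -> str:
    """Construct a first-round triage prompt for a security bug report."""
    return (
        "You are triaging a security bug report. Assess whether the report "
        "is plausible and assign a severity.\n"
        f"Allowed severities: {', '.join(SEVERITIES)}.\n"
        "Reply with 'SEVERITY: <label>' on the first line, "
        "then a one-line rationale.\n\n"
        f"Report:\n{report}"
    )

def parse_triage_reply(reply: str) -> TriageResult:
    """Parse the model's reply; fall back to 'medium' if the reply is malformed."""
    lines = [ln.strip() for ln in reply.strip().splitlines() if ln.strip()]
    severity = "medium"
    if lines and lines[0].upper().startswith("SEVERITY:"):
        label = lines[0].split(":", 1)[1].strip().lower()
        if label in SEVERITIES:
            severity = label
    rationale = lines[1] if len(lines) > 1 else ""
    return TriageResult(severity=severity, rationale=rationale)

# In production, `reply` would come from a frontier-model API call fed
# with build_triage_prompt(report); here it is stubbed for illustration.
reply = "SEVERITY: high\nUnsanitized input reaches a SQL query; likely injectable."
result = parse_triage_reply(reply)
print(result.severity)  # high
```

The structured `SEVERITY:` line makes the model's answer machine-parseable, so the same pattern extends naturally to the other bullets (de-duplication, reproduction steps) by changing the prompt and parser.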

You can infer that, if defense mechanisms can be made this powerful with AI, future cyberattacks will be even more insidious and prolific as cybercriminals leverage AI for their own attacks, which they already do. The future of AI cyber warfare is sure to be more malignant than it is today. This also means that, starting today, building secure-by-design software and digital infrastructure is a MUST.