Anthropic announced the launch of Claude Code Security on February 20, 2026. The new tool is designed to identify complex software vulnerabilities and suggest targeted patches for human review. It is currently available as a limited research preview for Enterprise and Team customers, as detailed in Anthropic's official news release.
Unlike traditional static analysis tools that rely on rule-based pattern matching, this system uses advanced reasoning to understand business logic and data flows. This allows it to catch subtle flaws like broken access controls that automated testers frequently overlook. The tool reads and reasons about code the way a human security researcher would, tracing how data moves through an entire application.
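To make that class of flaw concrete, the short Python sketch below is purely illustrative; the function names and data are hypothetical and not drawn from Anthropic's materials. It shows a broken access control in which a record is fetched by ID without checking that the caller owns it: no obviously dangerous call appears for a pattern-matching scanner to flag, while tracing how the data flows to the caller reveals the leak.

```python
# Hypothetical illustration of a broken access control (an insecure direct
# object reference). Names and data are invented for this example and are
# not taken from Anthropic's testing.

INVOICES = {
    101: {"owner": "alice", "total": 1200},
    102: {"owner": "bob", "total": 950},
}

def get_invoice(requesting_user: str, invoice_id: int) -> dict:
    # BUG: the record is looked up purely by ID. Nothing ties it back to
    # requesting_user, so any authenticated user can read any invoice.
    # There is no eval(), raw SQL, or other "dangerous" call for a
    # rule-based scanner to match; spotting the flaw requires reasoning
    # about who is allowed to see which data.
    return INVOICES[invoice_id]

def get_invoice_fixed(requesting_user: str, invoice_id: int) -> dict:
    # Suggested patch: enforce ownership before returning the record.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != requesting_user:
        raise PermissionError("caller does not own this invoice")
    return invoice

if __name__ == "__main__":
    print(get_invoice("bob", 101))        # leaks alice's invoice
    print(get_invoice_fixed("bob", 102))  # allowed: bob owns invoice 102
```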
The capability is built on Claude Opus 4.6, a model known for its agentic problem-solving. During internal testing, the model identified more than 500 previously unknown vulnerabilities in production open-source code, bugs that had gone undetected despite years of expert human review, in some cases for decades.
To ensure accuracy, the tool employs a multi-stage verification process where the AI attempts to disprove its own findings. Every suggested fix requires final approval from a human developer before being implemented in the codebase. Validated findings appear in a dashboard with confidence ratings and severity scores to help teams prioritize critical fixes.
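Anthropic has not published the dashboard's data model, so the sketch below is only an assumption-laden illustration of the workflow described above: a finding carries a confidence rating and severity score, and its suggested fix is gated behind explicit human approval. Every name in it is hypothetical.

```python
# Illustrative sketch only: the field names and classes below are assumptions
# based on the article's description (confidence rating, severity score,
# mandatory human sign-off), not Anthropic's actual schema or API.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Finding:
    title: str
    file: str
    severity: Severity
    confidence: float            # e.g. how strongly the finding survived self-refutation
    suggested_patch: str
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # A human developer signs off before anything touches the codebase.
        self.approved_by = reviewer

def apply_patch(finding: Finding) -> None:
    # Refuse to apply any fix that has not cleared human review.
    if finding.approved_by is None:
        raise RuntimeError("suggested fix requires human approval first")
    print(f"Applying patch for '{finding.title}' (approved by {finding.approved_by})")

if __name__ == "__main__":
    finding = Finding(
        title="Missing ownership check on invoice lookup",
        file="billing/views.py",
        severity=Severity.HIGH,
        confidence=0.92,
        suggested_patch="verify invoice.owner == requesting_user before returning",
    )
    finding.approve("security-reviewer")
    apply_patch(finding)
```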
Anthropic partnered with the Pacific Northwest National Laboratory to validate these defensive capabilities for critical systems. The goal is to provide defenders with the same intelligence that attackers use to find exploits. This release aims to establish a higher security baseline across the entire software industry.
The release of Claude Code Security signals a fundamental shift in the cybersecurity industry from reactive signature matching to proactive agentic defense. By leveraging a model capable of reasoning about entire repositories at once, Anthropic is challenging the dominance of traditional vendors like CrowdStrike and Okta. This move suggests that the future of security lies in AI-native systems that act as autonomous researchers rather than static filters. As open-source maintainers receive expedited access, the industry may see a significant reduction in long-standing zero-day vulnerabilities across the global software supply chain.