Anthropic Launches Claude Security Public Beta

Anthropic opened a public beta for Claude Security, an AI tool that scans enterprise codebases, flags vulnerabilities with confidence scores, and suggests fixes via Claude Code.

The tool can analyze a full repository with a single click, mapping relationships between components, following data flows, and checking whether flagged code paths are actually exploitable. After a scan, Claude Security returns a list of potential flaws; for each finding it explains how the issue can be reproduced, justifies the assigned severity, and attaches a confidence percentage to help teams prioritize remediation.
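The severity-plus-confidence pairing described above lends itself to a simple triage ordering. The sketch below is purely illustrative: the field names and severity scale are assumptions, not Claude Security's actual output schema.

```python
# Hypothetical finding record: the article says each finding carries a
# severity justification and a confidence percentage. Field names and the
# severity scale here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str        # assumed scale: "critical", "high", "medium", "low"
    confidence: float    # 0-100, per the article's "confidence percentage"

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def prioritize(findings):
    """Order findings so high-severity, high-confidence items come first."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f.severity], f.confidence),
        reverse=True,
    )

findings = [
    Finding("SQL injection in /login", "high", 92.0),
    Finding("Verbose error page", "low", 99.0),
    Finding("Hardcoded credential", "critical", 60.0),
]
for f in prioritize(findings):
    print(f"{f.severity:>8}  {f.confidence:5.1f}%  {f.title}")
```

Sorting by severity first and confidence second is one reasonable policy; a team worried about false positives might weight confidence more heavily.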

When a vulnerability is flagged, users can open Claude Code inside the same interface to generate suggested patches or remediation steps. Anthropic demonstrated a one-click workflow that initiates a full-repository analysis and returns findings in a single session.

Claude Security was previously offered as a research preview under the name Claude Code Security to a limited set of organizations. Anthropic says feedback from hundreds of firms influenced the public beta. The tool is available inside the Claude interface and on a dedicated page on Anthropic’s website. New features include scheduled and targeted scans and options to export results to Slack, Jira, CSV and Markdown files.
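The beta's export options (CSV and Markdown among them) are straightforward to picture. The snippet below is a generic sketch of what such exports might look like, using invented finding fields; the article does not document Claude Security's real export schema.

```python
# Hypothetical sketch of CSV and Markdown exports of scan findings.
# The finding fields are assumptions for illustration only.
import csv
import io

findings = [
    {"title": "SQL injection in /login", "severity": "high", "confidence": 92},
    {"title": "Hardcoded credential", "severity": "critical", "confidence": 60},
]

def to_csv(rows):
    """Serialize findings as CSV with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", "severity", "confidence"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def to_markdown(rows):
    """Serialize findings as a Markdown table."""
    lines = ["| Title | Severity | Confidence |", "|---|---|---|"]
    lines += [f"| {r['title']} | {r['severity']} | {r['confidence']}% |"
              for r in rows]
    return "\n".join(lines)

print(to_csv(findings))
print(to_markdown(findings))
```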

Krzysztof Katowicz-Kowalewski, staff product security engineer at Snowflake, said early testing of the research preview "surfaced novel, high-quality findings that helped us identify and address potential security issues before they could affect our environment or our customers," and noted that the company plans to expand its use.

Claude Security runs on Anthropic’s Claude Opus 4.7 model, which includes embedded cyber guardrails intended to limit its use for high-risk offensive security tasks. Anthropic previously made a more capable cyber model, Claude Mythos Preview, available only to select partners through a program called Project Glasswing, citing safety concerns as its reason for withholding a public release.

The UK’s AI Security Institute tested Mythos Preview and found it performed better on cyber tasks, completing a 32-step enterprise network attack simulation designed to evaluate exploit capability. The researchers noted that simulations differ from real-world environments because they typically lack proactive human defenders, so the results do not prove a model could breach a well-defended, live enterprise network.

Anthropic says the public beta will help enterprise security teams find and fix vulnerabilities faster while the company continues to refine the tool through user feedback. The firm has emphasized validation features such as confidence scores to help reduce false positives and improve the signal-to-noise ratio for security analysts.
