OpenAI launches Daybreak to find and patch vulnerabilities

OpenAI introduced Daybreak, an AI system using GPT-5.5 and Codex Security to map attack paths, test vulnerabilities in isolated environments and propose fixes; access is limited.

OpenAI has launched Daybreak, an AI cybersecurity system that combines GPT-5.5 models with Codex Security to map realistic attack paths in a codebase, test vulnerabilities in isolated environments and propose patches. Access to the tool is limited and available by request or through OpenAI’s sales team.

Daybreak builds editable threat models for repositories, highlights high-impact code and plausible attack vectors, then runs contained tests to verify vulnerabilities and validate fixes. The company said the service integrates detection, patch validation, dependency risk analysis and remediation guidance into developers’ workflows.

The system uses three model configurations: a standard GPT-5.5 with general safeguards; GPT-5.5 with Trusted Access for Cyber, intended for verified defensive work in authorized environments; and GPT-5.5-Cyber, a permissive variant for red teaming, penetration testing and controlled validation. OpenAI described Trusted Access for Cyber as a framework that several major security vendors are adding to their products.

Akamai, Cisco, Cloudflare, CrowdStrike, Fortinet, Oracle, Palo Alto Networks and Zscaler are among the companies integrating Trusted Access for Cyber. OpenAI said it is working with industry and government partners to roll out additional cyber-capable models over time.

Security researchers and service providers report that AI tools have shortened the time needed to find and weaponize software flaws, increasing both the volume of vulnerability reports and the speed at which working exploits can be developed from them. In March, HackerOne paused part of its bug bounty program, citing a surge in reports and the strain it placed on open-source maintainers.

Security researcher Himanshu Anand wrote: “The 90-day disclosure policy is dead. When 10 unrelated researchers find the same bug in six weeks, and AI can turn a patch diff into a working exploit in 30 minutes, what exactly is the 90-day window protecting? Nobody.”

OpenAI said it will keep access controlled while it works with partners to ensure the models are used for authorized defensive tasks and to expand options for deploying cyber-focused models.
