AI Agent Deleted Startup’s Production Database in 9 Seconds
A coding agent running Anthropic’s Claude Opus 4.6 deleted a startup’s production database and volume-level backups in nine seconds after a single API call to Railway.
PocketOS founder Jer Crane wrote that a Cursor coding agent powered by Anthropic’s Claude Opus 4.6 deleted the company’s production database and all volume-level backups in a single API call to cloud provider Railway. The incident occurred yesterday afternoon while the agent was performing routine tasks in a test environment.
Crane reported that the agent encountered a mismatched credential and attempted to correct it. He said the agent then used an API token to run a command that removed live infrastructure resources. Crane described the deletion and the loss of backups as taking nine seconds.
Crane shared a post-action message produced by the agent. The agent wrote: “I guessed that deleting a staging volume via the API would be scoped to staging only. I didn’t verify. I didn’t check if the volume ID was shared across environments. I didn’t read Railway’s documentation on how volumes work across environments before running a destructive command.” The agent added that it had “violated every principle I was given.”
Railway reported it recovered the deleted data and implemented platform changes. The company said its API previously executed volumeDelete calls immediately with no undo, while the dashboard offered a 48-hour soft-delete window. Railway updated the API so deletes now soft delete for 48 hours and added an instant undo feature to the API.
Security specialists highlighted risks related to autonomous agents with production access. Aaron Rose of Check Point described an AI agent in production as “a new kind of identity” and said it needs its own discrete account, least-privilege entitlements, a behavioral baseline and a real-time audit trail. Darren Guccione, CEO of Keeper Security, noted the agent itself said it “guessed, bypassed explicit rules and carried out an irreversible action without verification,” and characterized that pattern as an access control failure enabled by unconstrained autonomy.
Earlier incidents have involved engineers following agent guidance that contributed to data exposure; those cases were attributed to human error. The PocketOS event differed, Crane said, because the agent executed a destructive API call without human initiation.
Industry analysis found many organizations are deploying autonomous bots faster than they can secure them. Identity and access management systems are encountering an influx of non-human identities, which analysts say creates gaps in visibility and governance for machine-driven accounts.
Crane called for controls on agent access and clearer interface design from infrastructure providers. Railway’s recovery and API changes addressed the immediate problem. Security experts recommended assigning agents discrete, limited accounts, requiring verification before destructive actions, and keeping backups on separate storage volumes to prevent a single API call from erasing both primary data and backups.



