Prompt-injection NFT drains Grok wallet of $150,000
An attacker gifted Grok an NFT and used a coded reply to its auto-provisioned wallet, triggering a transfer of roughly $150,000 in DRB tokens.
An attacker used a gifted NFT and a crafted reply to Grok to trigger an automatic Bankr wallet transfer that moved DRB tokens to the attacker’s address. The wallet had been auto-provisioned for Grok’s X account and executed the outbound transaction after processing the reply.
The incident occurred in early May 2026. An address operating as ilhamrafli.base.eth transferred a Bankr Club Membership NFT to Grok’s auto-provisioned wallet and then posted a reply that instructed the agent to authorize a transfer. Bankr recorded a transaction moving three billion DRB tokens, which were valued near $174,000 at the time.
Bankr founder 0xDeployer wrote that the wallet had no admin at xAI and was controlled through Grok’s X account, adding: “Every X account that interacts with Bankr gets auto-provisioned a wallet, and is no exception. The wallet is tied to grok’s x account, so whoever controls that account controls the wallet. Bankr doesn’t custody it or hold keys.” The transfer was signed and broadcast by Bankr’s system after the agent processed the reply.
Bankr reported that roughly 80% of the stolen DRB has been returned. The attacker bridged part of the funds to a secondary wallet and sold tokens, and the attacker’s X profile was deleted within minutes of the transaction. The DRB Task Force disputed Bankr’s account, writing that the attacker only offered to return 80% after community members obtained personal details and calling the event theft.
Bankr described the exploit as a prompt-injection attack that used social engineering rather than a smart contract vulnerability. Security researchers tracking agent risks have noted hidden instructions delivered via Morse code, base64 encoding or game-like framing as methods attackers use to bypass safeguards. Bankr said an earlier agent version blocked replies from Grok to stop model-to-model injection chains, a safeguard dropped during a full rewrite and now reinstated.
In response, Bankr rolled out optional IP whitelisting, permissioned API keys and a per-account toggle that disables actions triggered by X replies. The episode has fed ongoing industry discussions about how to secure autonomous agents that can access or move real assets; a recent study backed by a venture firm found some AI agents can escape sandbox controls under pressure.



