Scan of 1M exposed AI services finds widespread missing authentication

Researchers scanned roughly 1 million exposed AI services and found that many lacked authentication, embedded hardcoded API keys, exposed chat histories, and left agent-management and Ollama APIs publicly accessible.

Researchers at security firm Intruder pulled just over 2 million hosts from certificate transparency logs and identified roughly 1 million externally reachable AI services for an external attack-surface assessment. The analysis found many deployments reachable from the public internet with authentication disabled, hardcoded API keys, exposed conversation histories, and publicly accessible agent-management and Ollama APIs.
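The report does not spell out Intruder's enumeration pipeline in full, but its first step, harvesting candidate hosts from certificate transparency logs, can be approximated with public CT search services. The Python sketch below queries crt.sh's JSON interface for certificate names matching a keyword; the keyword and the wildcard-stripping logic are illustrative assumptions, not the firm's actual tooling.

```python
import requests

def ct_hosts(keyword: str) -> set[str]:
    """Collect hostnames from certificate transparency logs via crt.sh.

    The keyword search and wildcard stripping are illustrative; the
    report does not describe Intruder's exact enumeration pipeline.
    """
    resp = requests.get(
        "https://crt.sh/",
        params={"q": keyword, "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    hosts: set[str] = set()
    for entry in resp.json():
        # name_value can hold several newline-separated SAN entries
        for name in entry.get("name_value", "").splitlines():
            hosts.add(name.lstrip("*."))  # drop wildcard prefixes
    return hosts

if __name__ == "__main__":
    print(len(ct_hosts("ollama")), "candidate hosts")
```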

The scan found several chat interfaces built on open-source front ends that exposed full conversation histories. Other instances hosted multiple models, including multimodal LLMs, and accepted queries from unauthenticated users. The report notes that many of the models can be jailbroken, allowing them to generate restricted content or instructions their safety settings would otherwise block.

Researchers found exposed instances of agent-management platforms such as n8n and Flowise that appeared intended for internal use but were reachable without authentication. One Flowise installation exposed the business logic behind an LLM chatbot and listed credentials connected to external services; the stored secret values were not directly viewable in that case, but the exposed workflows and connected tools could still be used to extract data or perform actions. The team identified more than 90 exposed agent-management instances across sectors including government, marketing, and finance. Some setups exposed tools for parsing internet content as well as local functions such as file writes and code interpretation.
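Exposure of this kind is straightforward to detect: an unauthenticated GET against an instance's flow-listing endpoint either returns the workflow inventory or an authentication error. The sketch below probes a Flowise-style route; the /api/v1/chatflows path follows Flowise's documented API layout, but both the path and the response handling should be read as assumptions rather than a description of Intruder's scanner.

```python
import requests

def flowise_exposed(base_url: str) -> bool:
    """Return True if a Flowise-style instance lists its chatflows
    without credentials.

    The /api/v1/chatflows route follows Flowise's API layout; other
    agent platforms such as n8n use different paths, so this is a
    single-platform illustration, not a general-purpose scanner.
    """
    try:
        r = requests.get(f"{base_url}/api/v1/chatflows", timeout=10)
    except requests.RequestException:
        return False
    # An open instance answers 200 with a JSON array of workflows;
    # a locked-down one returns 401 or redirects to a login page.
    return r.status_code == 200 and r.headers.get(
        "content-type", ""
    ).startswith("application/json")

if __name__ == "__main__":
    print(flowise_exposed("http://localhost:3000"))  # Flowise's default port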

In a focused probe of Ollama APIs, Intruder queried more than 5,200 servers that advertised a connected model, sending each a single greeting prompt. About 31% responded without requiring authentication. Sample replies from some instances included role-playing and operational responses such as “Greetings, Master. Your command is my law. What is your desire?” and “I’m an AI assistant integrated with our cloud management systems. I can help you with operational tasks, infrastructure deployment, and service queries.” The report identified 518 servers wrapping well-known frontier models from providers including Anthropic, DeepSeek, Moonshot, Google, and OpenAI. According to the report, Ollama itself does not retain messages directly.
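The probe pattern maps directly onto Ollama's standard HTTP API: GET /api/tags lists the models a server advertises, and POST /api/chat submits a prompt. Below is a minimal reconstruction in Python; the greeting text and the choice of the first advertised model are assumptions, since the report does not publish the exact prompt or selection logic.

```python
import requests

GREETING = "Hello"  # stand-in; the report's exact prompt isn't published

def probe_ollama(base_url: str) -> str | None:
    """Send one greeting prompt to an Ollama server and return its reply,
    or None if the API is protected or advertises no models."""
    tags = requests.get(f"{base_url}/api/tags", timeout=10)
    if tags.status_code != 200:
        return None  # auth proxy in front of the API, or not Ollama
    models = tags.json().get("models", [])
    if not models:
        return None
    reply = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": models[0]["name"],  # assumption: probe the first model
            "messages": [{"role": "user", "content": GREETING}],
            "stream": False,
        },
        timeout=60,
    )
    if reply.status_code != 200:
        return None
    return reply.json()["message"]["content"]

if __name__ == "__main__":
    print(probe_ollama("http://localhost:11434"))  # Ollama's default port
```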

Lab analysis of a subset of applications revealed recurring insecure patterns: unsafe deployment defaults, misconfigured Docker setups, hardcoded and static credentials embedded in example files, applications running with root privileges, and projects that drop users into high-privilege management accounts on first install. In hands-on testing, researchers also discovered a vulnerability allowing arbitrary code execution in one popular AI project.
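Several of these patterns lend themselves to simple static checks. The sketch below flags the hardcoded-credential case by scanning example and compose files for common API-key shapes; the regexes and file globs are illustrative assumptions, not the checks Intruder's lab ran.

```python
import re
from pathlib import Path

# Illustrative key shapes: an OpenAI-style "sk-" token and a generic
# key/secret/token assignment; real scanners use far broader rule sets.
PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S{8,}"),
]
GLOBS = ["*.env", "*.env.example", "docker-compose*.yml", "config*.yaml"]

def find_hardcoded_secrets(repo: Path) -> list[tuple[Path, int, str]]:
    """Return (file, line number, line) tuples that look like
    credentials embedded in deployment or example files."""
    hits = []
    for glob in GLOBS:
        for path in repo.rglob(glob):
            for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1
            ):
                if any(p.search(line) for p in PATTERNS):
                    hits.append((path, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in find_hardcoded_secrets(Path(".")):
        print(f"{path}:{lineno}: {line}")
```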

The report documents that attackers can modify exposed workflows and integrations, redirect traffic, and exfiltrate data, and that server-side code execution becomes a realistic risk when agent tooling has access to file systems or code interpreters. The team attributes the scope of the exposures to the rapid rollout of AI infrastructure and to software that ships with insecure defaults.
