Moltbot’s AI ‘Employee’ Dreams Collide with Security Nightmares
Moltbot’s AI ‘Employee’ Dreams Collide with Security Nightmares
Moltbot (formerly Clawdbot) is an AI bot designed to act as a “middleman” between user apps and AI subscriptions, enabling tasks like email management and calendar updates via chat apps such as WhatsApp and Telegram.
The system runs locally or on user-chosen cloud servers, storing data in Markdown while relying on external AI models for its “brainpower.”
Alex Finn, a key figure in the project, described the vision: “You are going to have an AI employee that’s tracking trends for you, building you product, delivering you news, creating you content, running your whole business …”
Low Level, a security researcher, highlighted the risks: “The bigger issue ... is prompt injection. LLMs don’t distinguish very clearly between a user command and just any old data that it feeds.”
Hundreds of misconfigured Moltbot instances have been found exposed on Shodan and Censys, raising concerns about data exposure and control UI vulnerabilities.