AI agents have evolved from simple chatbots into tools capable of executing tasks on a user's behalf. Recently, Clawdbot, developed by Peter Steinberger, has drawn widespread attention: an AI agent that can be controlled remotely through messaging apps such as WhatsApp. Since launch, the product has caused a sensation and sparked heated discussion, both positive and negative. This article is excerpted from the YouTube video “Please don’t install Clawdbot,” in which channel host Alberta lists the ways Clawdbot can run out of control while executing tasks and explains how to guard against such behavior before installing it.
An Entrepreneur Who Breaks with Tradition by Building a New Kind of Chatbot
Peter is a successful serial entrepreneur who has founded and sold several companies, walking away with a pocket full of cash. He has openly joked that over the past few years his time has gone to little more than blackjack and prostitutes, leaving him with ample leisure. He observed that although AI agents are widely discussed, the general public still lacks an accessible, genuinely capable “agent execution” AI tool.
Driven by a sense of fun, Peter developed Clawdbot. He aimed to create a platform that even non-technical users could easily activate. Unlike traditional tools requiring complex commands or terminal operations, Clawdbot allows users to give commands to computers or smartphones through familiar instant messaging apps. This convenience quickly met user needs and even prompted major companies like Anthropic to accelerate the development of similar features.
Clawdbot’s Name Clashed with Claude; After Multiple Renames, It Is Now Officially OpenClaw
After its rapid rise in popularity, Clawdbot ran into legal trouble. Because the name sounds very similar to Claude, Anthropic’s AI model, Peter had to rename the project to avoid trademark infringement. Clawdbot was temporarily renamed Moltbot, continuing the lobster theme, and is now officially called OpenClaw.
How to Keep Clawdbot’s Tasks from Running Out of Control
Many users have found that bots sometimes carry out instructions in unintended ways. In one case, a bot told to remind the user to buy milk in the morning instead sent reminders every 30 minutes all night, burning through the user’s entire token allowance for task execution in a single evening. In other cases, attackers have maliciously instructed someone else’s bot to reset its host computer. These incidents show that AI agents are highly susceptible to misinterpreting commands and to malicious prompt injection, creating real risks of financial loss and device damage.
Although fully autonomous AI agents promise high productivity, many industry experts warn that the risks of granting them unlimited permissions cannot be ignored. Because Clawdbot requires high-level access rights, a malicious or misfired operation could leak private data, drain bank accounts, or even send career-threatening emails in the user’s name. For those who still want to try it, Alberta recommends strict isolation: do not install it on a primary work computer; run it on a cloud host or a separate spare device; and give the agent its own dedicated, isolated email account so it can never reach sensitive data in the main account.
This article, “Think Twice Before Installing Clawdbot,” highlighting the potential for out-of-control behavior, first appeared on Chain News ABMedia.