Why Everyone is Freaking Out About Moltbot / OpenClaw (fka Clawdbot)
Clawdbot is dead. Long live Moltbot.
We’ve all seen enough movies and TV to share the same fantasy. We may be a bit nervous about how we get there and what it means for society, but we all want a personal AI butler that actually works. Something that knows our schedule, can manage our inbox, helps us look smart, and anticipates our needs before we ask.
There have been a lot of swings toward this end. Siri, Alexa and Google Assistant were promoted as revolutionary AI assistants that would cut friction from our daily lives. That promise was never met. They were hard to work with, didn’t really understand us, and weren’t good for much beyond fun facts, playing music, or telling you what to wear given the weather.
ChatGPT, Gemini, Claude, and the rest have come in and done a much better job of creating value than their predecessors, but there was still something to be desired… What was it? Oh yeah! The ability to do things.
Yes, they can write essays for high schoolers and solve physics homework for college students; hell, they can even rewrite angry emails to bosses. But they’re still not at the level the media has promised us for decades.
A few months ago, that fantasy seemed to be on the cusp of realization with Clawdbot, now known as Moltbot / OpenClaw.
No, it won’t make popcorn for you, and no, it won’t pick up your dry cleaning, but it will make to-do lists for your business, spin up tools like project trackers, and offer recommendations on how to improve your business. Hell, one guy even got it to text his wife on a regular basis so he wouldn’t have to. That isn’t mind-boggling on its own, since LLMs can already hold conversations, but Moltbot was a lot more powerful and accessible (once you jumped through all of the setup hoops) than most other popular AI products today.
Created by Peter Steinberger as an open-source project, Moltbot went from zero to 60,800 GitHub stars in 72 hours. Tech X lost its mind. YouTubers declared it “the most powerful AI tool I’ve ever used in my life.” But it wasn’t without drama.
Anthropic’s lawyers sent a cease-and-desist because Clawdbot was too close to Claude branding for Anthropic’s liking. Crypto scammers launched a fake token that hit a $16 million market cap before rug-pulling. Security researchers found hundreds of exposed instances leaking API keys and private conversations, because this thing is three months old, deeply insecure, and essentially a hobby project that went viral.
Let me take you through the story of Moltbot - what it is, why people are excited, how to set it up, why a founding member of Google’s security team is telling everyone to stop using it, and why it’s the canary in the coal mine for what’s to come.
The Promise
Here’s the thing about Moltbot that separates it from ChatGPT or Claude, even though it’s primarily powered by Claude Opus 4.5 - it’s not a chatbot. It’s an agent.
That’s key because a chatbot answers questions. An agent takes actions.
Moltbot runs on your hardware, a Mac Mini, a Linux server, a $5/month VPS, even a Raspberry Pi. It has persistent memory across every conversation, which means it actually knows who you are, what you care about, your style and what you’re working on. It connects to the messaging apps you already use (WhatsApp, Telegram, iMessage, Slack, Discord) so you interact with it the same way you’d text a colleague or an employee.
And then it does stuff for you. It checks your calendar and tells you when to leave based on current traffic. It controls your smart home. It can run shell commands on your computer, manage files, even browse the web, fill out forms and book flights.
Federico Viticci’s MacStories article is what lit the fuse. He’d been using it to replace automations he previously paid Zapier for, all running locally and for free (minus API costs). He built a virtual remote for his LG TV. Set up personalized morning briefings with voice. The kinds of custom tools that would’ve required hiring a developer, conjured into existence through conversation.
One bit that stuck out:
“When the major consumer LLMs become smart and intuitive enough to adapt to you on-demand for any given functionality—when you’ll eventually be able to ask Claude or ChatGPT to do or create anything on your computer—what will become of ‘apps’ created by professional developers?”
That’s the real promise here. Not just a vibe coding platform. Not just a better assistant, but a potential rewiring of how software works. Why download an app when you can describe what you want and have AI build it?
The YouTube hypebeasts ran with this hard. “24/7 AI employee.” “Run your business and life.” Mac Minis flying off shelves because people wanted dedicated hardware for their new digital butler.
I’d argue the hype was directionally correct, even if the execution isn’t ready (more on this in a bit).
I mean who wouldn’t want a 24/7, smart, personalized AI butler that can do all of this?
Capabilities
I mentioned some of what I saw in videos and on X above, but from my research Moltbot can do a lot more than text your spouse.
Productivity:
Inbox Zero automation—unsubscribe from junk, archive newsletters, surface what matters
Calendar management with traffic-aware departure alerts (“Leave now for your 3pm based on current conditions”)
Morning briefings that consolidate calendar, email, weather, and tasks into one summary
The “wait, it can do that?” stuff:
A 1Password engineer set it up and within an hour, Moltbot had built him a fully functional kanban board where he could assign it tasks and track their state
Multi-agent orchestration: agents delegating work to other specialized agents
Self-improvement: it can write new “skills” for itself based on what you need
The ecosystem: Moltbot has a skills marketplace called MoltHub. Think of skills as plugins / prebuilt capabilities you can add to make the agent more capable. They range from Shopify integration (customer support automation) to smart home control (Hue, Sonos, Eight Sleep) and wine cellar management. One user has 962 bottles catalogued, searchable by food pairing recommendations.
Anyone can publish skills, but that means that anyone can publish skills. Even bad actors. This has already been a problem in the recent hype.
The cost reality: The software itself is free and open source under MIT license. The cost you’ll pay comes from LLM API usage:
Light usage (few commands daily): $10-30/month
Moderate usage (regular tasks, research): $30-70/month
Heavy usage (constant automation): $70-150+/month
Claude Opus runs $5 per million input tokens and $25 per million output tokens. That adds up fast when you have an always-on agent processing your life and building you new apps.
One user accidentally ran up a $200 bill in a single day because of a runaway automation loop. Guardrails matter.
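To make the pricing concrete, here’s a quick back-of-envelope in Python using the Opus rates quoted above. The daily token counts are illustrative guesses, not measured figures from Moltbot itself.

```python
# Opus pricing quoted above: $5 per million input tokens, $25 per million output.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens_per_day: int, output_tokens_per_day: int, days: int = 30) -> float:
    """Estimate a month of API spend from average daily token usage."""
    daily = input_tokens_per_day * INPUT_RATE + output_tokens_per_day * OUTPUT_RATE
    return round(days * daily, 2)

# A "moderate" user, say ~200k input / 25k output tokens a day (illustrative):
print(monthly_cost(200_000, 25_000))  # prints 48.75
```

Note how input tokens dominate the count but output tokens dominate the bill: an always-on agent that keeps rereading its memory and generating long responses is exactly the shape of workload that produces surprise invoices like the $200-day above.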
So how do you actually get this thing running? Must be super tough, right?
Setup
The surprising thing about Moltbot is that the initial setup is relatively straightforward, especially if you’re technical.
One terminal command gets you started:
npm install -g moltbot@latest
moltbot onboard --install-daemon
The onboarding wizard walks you through the essentials: plug in your API keys (Anthropic, OpenAI, or Google), connect your messaging platforms, and install the background daemon so it keeps running. WhatsApp setup is literally scanning a QR code. Telegram is dropping in a bot token. Ten minutes and you’re chatting with your AI through the apps you already use.
You can run Moltbot on:
Your Mac or Linux machine (the local approach)
A cloud VPS for $4-25/month (Hetzner, DigitalOcean, etc.)
A dedicated Mac Mini (the enthusiast move that caused a shortage)
What you’ll need beyond that:
Node 22 or newer
API keys for your LLM of choice (Anthropic recommended for the best results)
Optionally: Brave Search API for web lookups, ElevenLabs for voice synthesis
Time to wire up integrations if you want the full experience
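If you want to sanity-check the prerequisites before running the installer, a tiny script like this works. The environment-variable names here are my assumptions based on the providers listed above, not documented Moltbot configuration.

```python
import re

REQUIRED_NODE_MAJOR = 22
# Hypothetical env var names for the keys mentioned above.
REQUIRED_KEYS = ["ANTHROPIC_API_KEY"]
OPTIONAL_KEYS = ["BRAVE_API_KEY", "ELEVENLABS_API_KEY"]

def node_ok(version_string: str) -> bool:
    """Check a `node --version` string like 'v22.11.0' against the minimum."""
    match = re.match(r"v?(\d+)", version_string.strip())
    return bool(match) and int(match.group(1)) >= REQUIRED_NODE_MAJOR

def missing_keys(env: dict) -> list:
    """Return the required key names absent from an environment mapping."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Feed `node_ok` the output of `node --version` and `missing_keys` your `os.environ`; if both come back clean, the one-liner install above should get you to the onboarding wizard.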
Here’s the honest asterisk, though. Casey Newton at Platformer spent a week with Moltbot trying to build a personalized morning briefing. His verdict? The initial setup was “the high-water mark” of his experience with the tool.
Easy to install doesn’t mean easy to deploy securely. Non-technical users can get it running. Making it into something you’d trust with your actual email and credentials requires DevOps knowledge that most people don’t have. That gap becomes important, as we saw in the recent unraveling.
The Drama
The Moltbot story has been an exercise in inspiring, viral AI technology getting out of hand fast.
First few days: Viral explosion. 60,000+ GitHub stars. Best Buy shortages. “JARVIS is here” energy across tech X. Everything is beautiful.
The Anthropic C&D: Then came the lawyers. Anthropic claimed “Clawdbot” was too similar to “Claude,” alleging trademark infringement. Peter Steinberger (the creator) had to rebrand in what he described as “10 seconds of chaos.” Clawdbot became Moltbot. The lobster mascot stayed.
The Impersonation Attack: In those 10 seconds of confusion, someone grabbed the @clawdbot handles on X. Crypto scammers saw blood in the water. Fake $CLAWD tokens appeared on Solana. The token hit a $16 million market cap as speculators FOMO’d in, thinking they were early to “the next big AI coin.”
Steinberger’s response: “To all crypto folks: Please stop pinging me, stop harassing me. I will never do a coin. Any project that lists me as coin owner is a SCAM.”
The token collapsed. Late buyers got rugged. The scammers walked away with millions.
The Security Researchers: While all this was happening, security folks started poking at the actual code. What they found scared them.
Jamieson O’Reilly from red-teaming company Dvuln searched Shodan for “Clawdbot Control” and found hundreds of publicly exposed instances. Complete credentials. API keys. Bot tokens. OAuth secrets. Full conversation histories. The ability to send messages as users and execute commands on their machines. Not pretty.
In one demonstration, researcher Matvey Kukuy sent a malicious email with a prompt injection to a vulnerable Moltbot instance. The AI read the email, believed it was legitimate instructions, and forwarded the user’s last five emails to an attacker-controlled address.
It took five minutes.
The security community consensus: It’s not secure.
Security
I need to be even more direct here. The security situation is bad.
Palo Alto Networks published an analysis using what they call “The Lethal Trifecta” of autonomous agents—three capabilities that, combined, create serious risk:
Access to private data (credentials, personal information, business data) - these are the things you give it access to like your Gmail and calendar
Exposure to untrusted content (web pages, messages, third-party skills) - it scans emails from strangers, sites that may have malicious prompt hacking embedded in them, etc
Ability to externally communicate (send messages, make API calls, execute commands)
Moltbot has all three. But there’s a fourth factor that makes it worse: persistent memory.
With persistent memory, attacks don’t need to trigger immediately. Malicious payloads can be fragmented across time as benign-looking inputs written to long-term memory, then assembled into executable instructions later. Time-shifted prompt injection. Memory poisoning. Logic bombs that detonate when conditions align.
Here’s what researchers actually found:
Credentials stored in plaintext Markdown and JSON files on the local filesystem
No sandboxing by default—the agent has the same access as the user running it
Proxy misconfigurations that auto-authenticate “local” connections even when the instance is exposed to the internet
The skills marketplace can be poisoned: a proof-of-concept malicious skill hit 4,000+ downloads before being caught
A malicious VS Code extension impersonating Clawdbot was found on Microsoft’s official marketplace
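To see why plaintext storage is such a problem, here’s a minimal sketch of the kind of scan researchers ran against exposed instances. The file types and marker strings are illustrative assumptions on my part, not Moltbot’s actual layout.

```python
from pathlib import Path

# Substrings that commonly mark secrets in config files (illustrative list).
SECRET_MARKERS = ("api_key", "token", "secret", "password")

def find_plaintext_secrets(config_dir: str) -> list:
    """Flag Markdown/JSON files under config_dir that appear to contain secrets."""
    hits = []
    for path in Path(config_dir).rglob("*"):
        if path.suffix not in (".json", ".md"):
            continue
        text = path.read_text(errors="ignore").lower()
        if any(marker in text for marker in SECRET_MARKERS):
            hits.append(str(path))
    return sorted(hits)
```

A dozen lines of stdlib Python is all it takes, which is exactly why info-stealer malware retooling for this target is a credible prediction rather than hand-waving.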
Cisco’s security team ran their own analysis. Their verdict: “From a capability perspective, OpenClaw is groundbreaking. From a security perspective, it’s an absolute nightmare.”
Heather Adkins, a founding member of Google’s Security Team, issued a public advisory: “Don’t run Clawdbot.”
Hudson Rock is predicting that info-stealing malware—RedLine, Lumma, Vidar—will adapt to target Moltbot’s local storage specifically. It’s a new goldmine sitting on enthusiasts’ machines.
The reality? People using it are beta testing in a hostile environment, with real consequences.
Some people are working to mitigate the risk, though they all agree real risk remains. Some of the mitigations include:
Only giving it access to lower-risk things, or putting restrictions and guidance around it. For example, creating a separate email address for the bot, forwarding it only emails you trust and want it to read, and never telling anyone else about that address
Setting rules so that it only acts on emails sent from you, or so that it can read everything but act on nothing, which seems a bit beside the point
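The “only act on emails sent from you” rule can be enforced with a filter as simple as this sketch. The trusted address is a placeholder, and this is a guardrail, not a security boundary: a forged From: header still defeats it.

```python
from email.utils import parseaddr

# Placeholder: the only address the agent is allowed to take actions for.
TRUSTED_SENDERS = {"me@example.com"}

def should_agent_act(from_header: str) -> bool:
    """Gate agent actions on the parsed From: address, not the display name."""
    _, addr = parseaddr(from_header)
    return addr.lower() in TRUSTED_SENDERS
```

Parsing the address rather than substring-matching the header matters: it blocks the classic spoof of putting a trusted address in the display name, e.g. `"me@example.com" <attacker@evil.example>`.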
The open-source community is working on fixes, but so are the hackers working on ways around them.
Small Business Impact
So what does this mean if you’re running a small business?
The opportunity is real:
One Shopify store deployed Moltbot for customer support, handling “Where is my order?” tickets. Result? 70% reduction in human ticket volume. $4,000/month in support cost savings.
For indie operators and small teams, there is real promise here: capabilities that used to require dedicated staff or expensive SaaS tools, now running on a $25/month VPS plus the cost of API credits.
Replace Zapier with a self-hosted alternative. Automate the repetitive coordination work. Get personalization at scale without enterprise budgets.
The risk is also very real:
“One employee download can expose your entire company.”
Shadow AI is already a problem—employees spinning up AI tools without IT oversight. Moltbot makes that problem worse because it’s so capable and so insecure by default. An employee thinks they’re being productive. They’re actually creating an attack surface that touches email, Slack, customer data, and file systems.
The skills supply chain is another concern. You’re trusting community-written code with your business data. The proof-of-concept attacks show this can be exploited at scale.
My take? This is a preview of the next productivity wave. The hype signals that this is what we’ve been waiting for - easy-to-use, powerful AI tools that take actions for us. But it also raises critical flags for the future of these tools. Moltbot took a risky shot to show the world what was possible. It was never meant to be an enterprise-grade product. It’s only three months old. And it shows.
In my opinion, this solution is not secure enough for most small businesses to deploy, but the ones willing to take the risk will have a huge advantage over their competitors. It is a big gamble, though.
Enterprise
Enterprise CIOs should be paying attention, not because Moltbot itself is enterprise-ready, but because of the architecture it represents.
The AI industry is entering its second phase: distribution. Intelligence is moving from centralized cloud data centers to the edges - devices, local servers, infrastructure you control.
This matters for a few reasons:
Cost arbitrage: For companies burning through API bills, running agents locally shifts spend from operational expenditure (API costs) to capital expenditure (hardware). At scale, the math starts favoring owned infrastructure.
Data sovereignty: Regulated industries—finance, healthcare, legal—can’t pipe sensitive client data to third-party APIs. On-device agents change that equation.
The infrastructure opportunity: What needs to exist for this to work in enterprise?
Agent orchestration platforms
Agent observability and monitoring
Identity and access management for agents
Data governance for agentic systems
Lighthouse Canton’s analysis put it well: “Moltbot’s significance lies not in its feature set, but in the architecture it represents.”
Society
Zoom out further and there are bigger questions here.
The labor question: When AI handles mundane coordination at near-zero marginal cost, what happens to the humans who used to do that work? Assistants, schedulers, customer support reps. Businesses replacing human labor with $30/month in API costs. Small, cash-strapped businesses will be thrilled. Wall Street will be over the moon at the Fortune 500 labor cost savings. Hell, Amazon just did a big layoff, and that was before the Moltbot hype.
The platform question: Back to Viticci’s point, what happens to apps when AI can build custom tools on demand?
Utility apps might be the first casualties. Why download a habit tracker when you can describe what you want and have Moltbot create it? The App Store’s value proposition gets wobbly when the agent layer can generate functionality on the fly.
Platform power may shift from whoever controls the app ecosystem to whoever controls the agent.
Closing This Out
The tech is real. The promise is real. The problems are also real.
Moltbot represents something genuinely new: a personal AI agent that walks the talk. That runs on your hardware. That remembers who you are. That lives in the messaging apps you already use.
The question isn’t whether personal AI agents will manage our digital lives. That’s coming regardless. The question is whether we’ll run them on infrastructure we control or cede that to cloud providers. Whether we’ll figure out the security model before or after the damage is done.
For now: exciting to watch, risky to deploy, inevitable.
If you’re curious, spin it up on an isolated VPS with no real data first. Treat it like the experiment it is.
If you’re building in this space, the infrastructure around these agents - the orchestration, the security, the observability - is where the opportunity lives.
Either way, we got a peek at the future. It came with a lobster mascot, a cease-and-desist letter, and a $16 million crypto scam.
Dabble at your own risk. I personally am waiting to see if it gets more secure and will try to learn how to make Claude Code do the same thing.


