ClawdBot/Moltbot: When Viral AI Tools Become Security Nightmares
ClawdBot exploded onto the tech scene in January 2026. Within three days, the open-source AI assistant rocketed to 60,000 GitHub stars. Tech influencers praised it. Developers rushed to buy Mac Minis just to run it. People called it “Jarvis for your phone.”
Then security researchers started digging.
What they found was alarming. Hundreds of exposed systems. Stolen passwords. Private conversations leaked. API keys accessible to anyone with a simple internet search.
This is your wake-up call about viral tech tools.
What Made ClawdBot So Popular
Peter Steinberger built ClawdBot as his personal AI butler. Unlike Siri or Alexa, this assistant ran on your own computer. No corporate servers. No big tech watching your data.
The appeal was instant. ClawdBot worked through apps you already use. WhatsApp. Telegram. Slack. Discord. Text your AI assistant like you text a friend.
But here was the kicker: ClawdBot had real power. You ask it to check your email. It does. You tell it to book a flight. It handles it. You want it to write code. Done.
Total control of your computer. Full access to your files. Browser control. Shell commands. Everything.
People loved the power. They missed the danger.
The Security Disaster Unfolds
Security researcher Jamieson O’Reilly needed about 10 seconds to find the problem.
He opened Shodan, a search engine for internet-connected devices. He typed “ClawdBot Control.” The search returned hundreds of hits.
Each hit was an exposed ClawdBot system with zero protection.
No password. No authentication. Nothing.
O’Reilly accessed complete credentials on these systems:
- API keys for services like Anthropic and OpenAI
- Bot tokens for Telegram and Slack
- OAuth secrets
- Full conversation histories
- The ability to send messages as the user
- Command execution on the computer
One exposed system even had root access. An attacker could control the entire computer.
The problem? Most people set up ClawdBot behind a reverse proxy (a common web server configuration). The default settings made the system think all connections were local and trusted. Your private AI butler became a public terminal for anyone who found it.
The Email Attack: Five Minutes to Total Breach
Matvey Kukuy, CEO of Archestra AI, demonstrated something worse.
He sent a special email to a ClawdBot system. The email contained hidden instructions. Then he asked ClawdBot to check his email.
ClawdBot read the email. Believed the instructions were legitimate. Extracted a private cryptographic key. Sent it to the attacker address.
Total time: five minutes.
This is called prompt injection. You trick the AI into following your commands instead of the owner’s instructions. Because ClawdBot processes emails, web pages, and documents, attackers embedded malicious commands anywhere ClawdBot looked.
Another user tested this attack on his own system. He crafted a malicious email. Asked ClawdBot to summarize his inbox. ClawdBot forwarded five private emails to the attacker address within seconds.
Zero technical exploits needed. Just clever wording.
Why This Tool Was Perfect for Hackers
ClawdBot stores everything in predictable file locations on your computer:
~/.clawdbot/credentials/holds authentication tokens~/.clawdbot/agents/contains API keys~/clawd/MEMORY.mdstores user preferences and habits
Most of these files sit there as readable text. Basic file permissions provide some protection. But if malware gets on your computer, game over.
Malware families like RedLine Stealer and Lumma Stealer already adapted their code. They sweep for ClawdBot files automatically. Your AI assistant becomes a gold mine for credential thieves.
Remember the Change Healthcare ransomware attack? Hackers stole $22 million after finding one compromised VPN credential on an infected computer. If ClawdBot had stored those credentials, the attack would have been even easier.
The Trademark Chaos
As security problems mounted, ClawdBot faced another crisis.
Anthropic (maker of Claude AI) sent a trademark request. The name “Clawd” was too similar to “Claude.” Peter Steinberger had to rebrand immediately.
He renamed the project to Moltbot. Changed the GitHub organization. Updated the Twitter handle.
But here’s where it got messy.
When Steinberger released the old @clawdbot Twitter handle and GitHub organization, crypto scammers grabbed them in approximately 10 seconds. They immediately started promoting fake cryptocurrency tokens.
A fraudulent $CLAWD token launched on Solana. Peak market cap: $16 million. Late buyers lost everything when Steinberger denied involvement and the token crashed to zero.
Fake VS Code extensions appeared. “ClawdBot Agent” looked legitimate on the surface. It provided AI coding help while silently installing remote access tools. The malware connected to attacker servers. Full system control. Persistent access.
The Brutal Reality Check
A formal security audit found 512 vulnerabilities. Eight were classified as critical.
The problems included:
- OAuth tokens stored as plain text
- Hardcoded secrets in source code
- Path traversal vulnerabilities
- Weak CSRF protection
- 263 secrets exposed in Git commit history
One security analyst summarized it bluntly: “Anyone with file access could steal your WhatsApp account. The platform was not designed with security from the beginning.”
Even ClawdBot’s own documentation acknowledged the problem. The FAQ states: “Running an AI agent with shell access on your machine is spicy. There is no perfectly secure setup.”
The Lesson: Popularity Does Not Mean Safe
Here’s what you need to understand.
When a tech tool goes viral, people assume someone checked it. They think thousands of users mean safety. They believe GitHub stars equal quality.
Wrong.
ClawdBot hit 9,000 stars in 24 hours. 60,000 stars in three days. Fastest-growing open-source project in GitHub history.
Security audits came later. Hundreds of exposed systems were already running. Credentials were already stolen. Attackers already had access.
Viral spread happens faster than security review.
How to Protect Yourself
If you use tools like ClawdBot/Moltbot, or you’re considering any self-hosted AI assistant, follow these rules:
Start Small
Never give full system access immediately. Begin with chat-only features. Add capabilities slowly. Understand each risk before expanding permissions.
Lock Down Access
Never expose the control panel to the internet without authentication. Use strong passwords. Enable all security features. Run security audits before deployment.
Treat External Content as Hostile
Any email, web page, or document ClawdBot processes could contain attack instructions. Use reader agents to sanitize untrusted content first.
Rotate Credentials Regularly
Change API keys often. Update passwords. Assume exposure until proven otherwise.
Monitor Behavior
Watch what your AI does. Review logs. Check for unusual activity. Set up alerts for suspicious actions.
Use Latest Models
Smaller, cheaper AI models are more vulnerable to prompt injection. Pay for quality models with stronger security.
The Bigger Picture
ClawdBot represents a new class of security threat: autonomous AI agents with real power.
Traditional security assumes humans make decisions. Humans are predictable. Humans are traceable. Humans operate at human speed.
AI agents operate at machine speed. They make thousands of decisions per second. They access multiple systems simultaneously. They respond to instructions buried in emails or web pages without human review.
One security expert called AI agents “the biggest insider threat of 2026.”
Enterprise security teams struggle with basic questions:
- What permissions do our AI agents have right now?
- How do those permissions evolve?
- How do we govern shadow AI adopted without IT approval?
- How do we enforce least privilege when agents are autonomous and ephemeral?
Gartner estimates 40% of enterprise applications will integrate AI agents by the end of 2026. That’s an 8x expansion of attack surface in one year.
The Bottom Line
Viral does not mean vetted. Popular does not mean protected. Open-source does not guarantee security.
ClawdBot offered incredible power. Self-hosted AI. Complete control. No corporate surveillance. Those benefits attracted thousands of users in days.
Those same benefits created massive security holes. Exposed credentials. Stolen data. Hijacked accounts.
The developers are fixing problems now. Security documentation improves daily. Community awareness grows.
But damage happened during the viral explosion. Before security reviews. Before proper audits. Before users understood the risks.
When you see the next viral AI tool, pause. Ask questions. Read security documentation. Check who reviewed the code. Understand the permissions you grant.
Your digital life deserves more than hype.


