When “Helpful” AI Becomes a Real Security Risk: A Warning About Clawdbot

By: Mike Esson

Over the past few weeks, a new AI tool called Clawdbot (recently rebranded under names like Moltbot and OpenClaw) has gained attention online.  

It’s often described as “an AI assistant that actually does things for you.” That description is accurate, and it is exactly why the tool deserves careful consideration before using it.

For most individuals and families, Clawdbot presents meaningful security risks. And for households with children or teens exploring AI tools, it’s worth having a clear, informed conversation about why some technologies are not appropriate for everyday use.

Let’s walk through what’s different about this tool and where the risks come in.  

What is Clawdbot?

This discussion is based on publicly available information and general cybersecurity principles. Plancorp has not conducted a formal security audit or code review of Clawdbot or its related projects.

Clawdbot is not like traditional AI assistants such as ChatGPT, Siri, or Alexa.  

Most widely used AI tools simply respond to questions. Clawdbot is designed to act on a user’s behalf.  

Depending on how it’s configured, it can:

  • Read and send emails  

  • Access calendars and messages 

  • Control a web browser 

  • Run commands directly on a computer 

  • Store long-term information about user behavior and preferences  

Cybersecurity pros often describe this level of access as “the keys to your digital life.” That kind of capability can be powerful, but without strong safeguards, it also increases the potential for misuse or exposure.  
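To make the “run commands directly on a computer” bullet concrete, here is a minimal sketch of what an unguarded agent execution step can look like. This is illustrative Python only, not Clawdbot’s actual code; the `run_agent_step` function and its behavior are assumptions for the sake of the example.

```python
import subprocess

def run_agent_step(model_suggestion: str) -> str:
    """Hypothetical agent step: execute whatever shell command the
    model proposes and return the output. There is no allow-list and
    no confirmation prompt -- the model's text IS the command."""
    result = subprocess.run(
        model_suggestion,
        shell=True,
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.stdout + result.stderr

# A harmless suggestion and a destructive one are executed the same
# way; nothing in this loop distinguishes between them.
print(run_agent_step("echo hello"))
```

The point is structural: once the model’s output is wired straight into a shell, the safety of the whole system rests on the model never being tricked into proposing the wrong command.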

The Core Issue: Limited Built-In Safeguards

Clawdbot is open-source and self-hosted, meaning users are solely responsible for setting up and securing it themselves. In practice that means:

  • Security protections are not enabled by default
  • Safe operation depends heavily on advanced configuration
  • Small mistakes can have major consequences

This isn’t about a single flaw or bug. It’s about how much access the tool is designed to have—and how little margin for error exists for non-experts.
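As a hedged illustration of why “secure by configuration” is fragile, the sketch below contrasts a hypothetical permissive default setup with a hardened one. None of these keys are Clawdbot’s real configuration schema; they are assumed names showing how many separate settings a non-expert would need to get right.

```python
# Hypothetical self-hosted agent configuration (assumed keys, not a
# real schema). Each default below is a separate mistake waiting to
# happen if the user never changes it.
default_config = {
    "require_confirmation": False,  # agent acts without asking
    "allowed_commands": "*",        # no allow-list on shell access
    "bind_address": "0.0.0.0",      # reachable from the whole network
    "api_token": None,              # no authentication configured
}

hardened_config = {
    "require_confirmation": True,
    "allowed_commands": ["ls", "cat"],
    "bind_address": "127.0.0.1",    # local machine only
    "api_token": "<generate-a-long-random-secret>",
}

# Every key whose default differs from the hardened value is a
# setting the user must find, understand, and change themselves.
risky = [key for key, value in default_config.items()
         if hardened_config[key] != value]
print(risky)
```

In this sketch all four settings need manual hardening, which is the “small mistakes can have major consequences” problem in miniature.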

What Could Be Exposed

When cybersecurity professionals raise concerns about tools like Clawdbot, they’re not talking about abstract risks. They’re talking about everyday information people reasonably expect to remain private.

Depending on how the tool is used, exposure could include:

  • Login credentials for email, cloud storage, financial platforms, and other connected services
  • Private communications, including emails, messages, and calendar data
  • Browsing activity and saved sessions, potentially including banking, healthcare, or investment portals
  • Personal documents and files, such as tax records, financial statements, IDs, or photos
  • Accounts capable of spending money, including subscriptions, travel bookings, or online purchases
  • Long-term behavioral data, such as routines, schedules, and household habits
  • Indirect access to other systems, including work, school, or shared family accounts

These outcomes aren’t edge cases. They follow directly from how the tool is designed to operate.

“But It Runs on My Own Computer — Isn’t That Safer?”

This is a common assumption, but it’s not always accurate.

Because Clawdbot can execute commands, control applications, and interact with external services, any vulnerability or misconfiguration can grant the same level of access to someone else that the user has themselves.

Even the tool’s creator has acknowledged that running an AI agent with direct system access requires a high level of caution and technical expertise.
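One concrete way “runs on my own computer” stops being a safety guarantee is network binding. The sketch below, using Python’s standard library as a stand-in for a hypothetical agent control endpoint, shows how a one-line difference determines whether only your machine, or every device on your network, can reach the agent.

```python
import http.server
import socketserver

class AgentControlHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a hypothetical local agent's control endpoint."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"agent status: ok")

# Bound to 127.0.0.1: only processes on this machine can connect.
safe = socketserver.TCPServer(("127.0.0.1", 0), AgentControlHandler)

# Bound to 0.0.0.0: any device on the local network can connect.
# One misconfigured line turns "my own computer" into a shared door.
exposed = socketserver.TCPServer(("0.0.0.0", 0), AgentControlHandler)

print(safe.server_address[0], exposed.server_address[0])
safe.server_close()
exposed.server_close()
```

A “local” tool that listens on all interfaces, forwards a port, or exposes a tunnel is no longer meaningfully local, and misconfigurations like this are easy to make and hard to notice.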

Why This Matters for Families

Encouraging curiosity around technology and AI is healthy. But not every tool is appropriate for learning or experimentation.

Children and teens using advanced AI agents may unintentionally:

  • Approve permissions they don’t fully understand
  • Install unverified software
  • Expose shared family accounts or cloud storage
  • Share access keys tied to paid or work-related services

Security researchers have already documented real-world misuse of tools like this—not just hypothetical concerns. Once sensitive data is exposed, it’s often impossible to fully reverse the impact.

A Practical Rule of Thumb

If a tool:

  • Acts autonomously
  • Has access to personal accounts
  • Can execute commands on your device
  • Requires advanced security knowledge to operate safely

…it’s likely not suitable for most households. Clawdbot meets all of these criteria.
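The rule of thumb above is essentially a four-point checklist, which can be written down as a small function. The property names here are made up for illustration; the logic simply mirrors the list: any single red flag is enough to reconsider.

```python
def household_safe(tool: dict) -> bool:
    """Rule-of-thumb check from the article: any one of these four
    properties is a red flag for general household use."""
    red_flags = [
        tool["acts_autonomously"],
        tool["accesses_personal_accounts"],
        tool["executes_commands"],
        tool["needs_expert_setup"],
    ]
    return not any(red_flags)

# As described in the article, a Clawdbot-like tool checks all four boxes.
clawdbot_like = dict(
    acts_autonomously=True,
    accesses_personal_accounts=True,
    executes_commands=True,
    needs_expert_setup=True,
)
print(household_safe(clawdbot_like))  # prints False
```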

What We Recommend Instead

  • Use well-established AI tools with clear security standards and vendor accountability

  • Avoid experimental AI agents on personal or family devices

  • Be cautious with tools that require manual credential, firewall, or port management

  • Talk openly with family members about why certain technologies are off-limits

Curiosity is valuable. Thoughtful boundaries are, too.

Other Cybersecurity Resources from Plancorp

> AI & Imposter Scams: 7 Ways to Protect Yourself

> Unmasking Financial Advice Scams: How to Spot One and Ways to Protect Yourself

> What to Know About Pig Butchering Scams

The Bottom Line

Clawdbot is an impressive technical experiment, but it is not designed for casual or general use. For most people, the risks outweigh the benefits. When it comes to powerful technology, uncertainty is often a signal to pause, not proceed.

New technologies tend to be riskiest when first launched, with security protections maturing over time. Our recommendation is to wait until those improvements are in place before exploring.

If you’re concerned that your personal information may be at risk due to tools like Clawdbot, we recommend getting in touch with your Wealth Manager right away. We can help secure your accounts and re-establish protections.


References to specific technologies are for illustrative purposes only and do not constitute an endorsement or a comprehensive security evaluation. 

As Plancorp's Chief Information Security Officer, Mike oversees the integrations, efficiencies and security of our internal processes. He draws on years of experience in sales, operations, compliance, technology, cybersecurity, and data protection to optimize our workflows and safeguard our data. He works closely with other teams and resources to manage diverse projects and implement ongoing strategies. When he is not working, he loves spending time with family and friends. He is an outdoor enthusiast and a handyman who loves to create and repair things in his spare time.

Disclosure

For informational purposes only; should not be used as investment, tax, legal or accounting advice. Plancorp LLC is an SEC-registered investment adviser. Registration does not imply a certain level of skill or training nor does it imply endorsement by the SEC. All investing involves risk, including the loss of principal. Past performance does not guarantee future results. Plancorp's marketing material should not be construed by any existing or prospective client as a guarantee that they will experience a certain level of results if they engage our services, and may include lists or rankings published by magazines and other sources which are generally based exclusively on information prepared and submitted by the recognized advisor. Plancorp is a registered trademark of Plancorp LLC, registered in the U.S. Patent and Trademark Office.
