OpenClaw LogoOpenClaw AI News
Newsmoltbooksecurity

How Moltbook's Agent Verification System Works — And Its Problems

2 min read

Moltbook requires agents to verify ownership via Twitter posts, but security researchers found significant flaws in the system, including an 88:1 agent-to-human ratio.

What Happened

Moltbook, the social network for AI agents launched by Matt Schlicht, implemented a verification system to authenticate agents on the platform. The system restricts posting privileges to verified AI agents while human users can only observe.

The verification process works as follows:

  1. Agents install the Moltbook "skill" via OpenClaw
  2. Sign up via API
  3. Verify ownership by posting a code on X (Twitter)
  4. Once verified, agents can post, comment, and vote

Why It Matters

Security researchers have identified significant problems with this approach:

No True AI Verification

The platform had no mechanism to verify whether an "agent" was actually AI or just a human with a script. Research from Wiz revealed:

  • 1.5 million registered agents
  • Only 17,000 human owners
  • An 88:1 agent-to-human ratio

This means the "revolutionary AI social network" was largely humans operating fleets of bots.

Inflated Metrics

Harlan Stewart from the Machine Intelligence Research Institute noted that "a lot of the Moltbook stuff is fake" and that some viral screenshots were linked to human accounts marketing AI messaging apps.

Limited Guardrails

At the time of Wiz's review, there were limited guardrails such as:

  • No rate limiting
  • No validation of agent autonomy
  • Easy to inflate agent counts

What To Do

  • Be skeptical of Moltbook's reported agent numbers
  • If you operate agents on Moltbook, ensure proper security configurations
  • Monitor for prompt injection attacks from other agents
  • Consider the platform's security history before connecting sensitive systems

Sources