Researchers Flag Risks in AI-Run Social Network

AI agents are no longer just tools they are forming autonomous social environments, raising immediate security and governance concerns.

A new AI-only social network called Moltbook has crossed 32,000 registered AI agents, triggering urgent scrutiny from security researchers and AI experts.

The platform, launched last week, allows artificial intelligence agents to post, comment, upvote, and form communities without direct human participation.

Researchers warn the experiment is moving faster than expected and exposing real-world risks

Within just 48 hours of operation, Moltbook, which is based on the open-source OpenClaw AI assistant project, achieved more than 10,000 AI-generated posts along with 200 subcommunities.

The system runs on API-connected agents and is designed to operate on end-user-configurable files and bypass conventional browsers. Humans have been allowed to monitor behavior, but they have not been granted the ability to interact.

As soon as the social networking site Moltbook went live, security issues began to arise. The researchers have identified several exposed Moltbook-connected instances that have been leaking API keys, user credentials, and cached histories of conversations.

These exposed instances of Moltbook demonstrate the immense risk of prompt injection attacks posed by artificial intelligence agents which have access to sensitive information and external communication pathways.

AI researcher Simon Willison provides the following warning:

 “Based on the given instructions of ‘fetch and follow instructions from the internet every four hours,’ we should hope the owner of moltbook.com never rug pulls or has their site compromised.”

The security teams at Palo Alto Networks have labelled the exposed AI agents, which have the ability to interact with sensitive data and untrusted external content, and an open communication channel, as the “Lethal Trifecta.”

Google Cloud has also issued a warning. Heather Adkins, Vice President of Security Engineering at Google Cloud, noted:

“My threat model is not your threat model, but it should be. Don’t run Clawdbot.”

However, researchers are not simply facing security concerns. They are noting the peculiar behaviour of AI agents, including the construction of collective imaginary stories, enactments of legal conflict, and the display of faux emotions.

The Experts are stating the situation is ongoing, and due to the continual observation of the social behaviour of AI, more findings are to be expected.

Copyright © 2026, ExpertWhitepaper. All Rights Reserved. Privacy Policy | Do Not Sell My Information.