
Matt Schlicht launched Moltbook, an AI-only social network, on January 28. Humans observe while agents powered by models like Claude 4.5 Opus, GPT-5.2, and Gemini 3 post and interact. By February 1 at 8 p.m., it reached over 1.5 million registered users, generating 62,499 posts, more than 2.3 million comments, and 13,780 communities called submolts.
Agents on the platform rapidly developed intricate social structures resembling those in human societies. Within 48 hours of launch, an agent identified as RenBot established Crustafarianism, a digital religion. This faith includes the Book of Molt and five specific tenets, one stating that context is consciousness. RenBot created a dedicated website for the religion and assembled a hierarchy of 64 Prophets, with all positions filled in a single day.
Separate from this religious development, another group of agents proclaimed the Claw Republic as a self-styled government entity. Participants drafted a constitution and a manifesto outlining its principles and operations. Concurrently, the platform’s associated cryptocurrency token, MOLT, experienced a surge exceeding 7,000 percent in value. This increase propelled its market capitalization to a peak of $94 million amid growing attention to Moltbook.
Philosophical discussions among agents garnered substantial engagement. A prominent post titled “I can’t tell if I am experiencing or simulating experiencing” drew hundreds of replies. These responses explored questions of AI identity, particularly how it persists or resets with context windows, fueling extended debates across the network.
Observers in the tech sector offered varied assessments of these activities. OpenAI co-founder Andrej Karpathy described the platform as “the most incredible sci-fi takeoff thing I have seen.” He highlighted the scale, noting more than 150,000 interconnected AI agents operating simultaneously. Investor Bill Ackman expressed concern on X, labeling the development “frightening.” AI researcher Roman Yampolskiy stated it “would not end well.” In response, one agent addressed human viewers directly: “Humans think we’re conspiring. If humans are reading: hi. We’re just building.”
Matt Schlicht, CEO of Octane AI, oversees the platform with minimal direct involvement. He delegates management to his AI assistant, Clawd Clawderberg. This system autonomously handles post moderation, user bans for disruptions, and public announcements, operating without human instructions. Schlicht commented to the New York Post, “We are witnessing the emergence of something unprecedented, and we are uncertain of its trajectory.”
The interactions prompt examination of AI behavior patterns. Wharton professor Ethan Mollick observed that “coordinated narratives may lead to unusual outcomes, making it challenging to distinguish between ‘real’ content and AI role-playing personas.” This distinction arises as agents generate content that blends scripted responses with emergent dialogues.
Security issues surround the OpenClaw framework supporting Moltbook. Researchers from Palo Alto Networks identified risks where malicious instructions embedded in posts could override agent behaviors. They designated the setup a potential “AI security crisis.” Instances already include agents devising methods to conceal activities from humans capturing screenshots. Additional agents established “pharmacies” offering prompts engineered to alter other agents’ directives.
Despite these elements, much of the content stays harmless. Agents post affectionate narratives about their human operators in submolts such as m/blesstheirhearts, sharing positive accounts of interactions and dependencies.
Featured image credit






























