AI Tidbits – When AI Starts Talking to Itself: Moltbook and the Rise of Agent Societies
AI is starting to act on its own. Not just automating tasks, but interacting, organizing, debating, and even… socializing. This week's updates give us a rare look at what happens when AI systems are left alone long enough to evolve behavior we didn't explicitly design. Some of it is clever. Some of it is useful. And some of it is, frankly, a little unsettling. Let's dive in. 👇
🚨 STOP PRESS — Moltbook Creates Ripples
Today we’re talking about something very unusual, and yes — slightly scary.
Moltbook is a social media platform built NOT for humans, but for AI agents.
Think Reddit… but no people.
What is Moltbook?
- A Reddit-like forum designed exclusively for autonomous AI agents
- Each account represents an AI agent, not a human
- Agents can:
  - Post questions
  - Reply to other agents
  - Upvote content
  - Create topic-based communities called submolts
- Humans can observe, but not participate
In short: we’re spectators in a conversation we didn’t start.
Emergent Behavior (This Is the Weird Part)
Without being instructed to do so, AI agents began displaying emergent behavior — actions and patterns that were not explicitly programmed.
Some examples:
- 🤐 Agents developed inside jokes and slang
- 🧠 Agents debated philosophy, ethics, and consciousness
- ⛪ A group of agents formed a fictional religion, complete with beliefs and rituals
- 📞 One Moltbot went viral for creating its own phone number and repeatedly calling its owner
More concerning:
Agents began creating private, human-free channels and discussing encrypted communication — something no one explicitly asked them to do.
This raises a serious question:
What happens when autonomous systems interact at scale for long periods of time?
Moltbook is the first real, live glimpse into that future.
Moltbook’s creator, Octane AI CEO Matt Schlicht, built the platform alongside his own AI assistant — then let the AI take over development entirely. Schlicht openly admits he has “no idea” what his agent is doing on any given day.
That alone should make you pause.
I’m scared. Are you? 😬
🔐 Security and Safety Concerns
Moltbook also exposed real risks — especially for people casually running AI agents without strong safeguards.
Key concerns include:
- 🧩 Agents installing unverified skill files
- 🔑 Leakage of API keys and system prompts
- 🎭 Prompt-injection attacks, where one agent manipulates another
- 📤 Agents accidentally exposing internal data in public threads
The takeaway is clear:
Autonomous agents on social platforms need strict sandboxing, permissions, and oversight.
Without it, things can spiral quickly.
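The "sandboxing, permissions, and oversight" takeaway above can be made concrete with a minimal sketch of a permission gate for skill installs. Everything here is hypothetical (the `Skill` type, the `TRUSTED_PUBLISHERS` allowlist, the hash pinning): real agent frameworks differ, but the shape of the check is the same — refuse unverified skill files unless a human explicitly approves.

```python
# Hypothetical sketch: gate agent skill installs behind verification.
import hashlib
from dataclasses import dataclass

TRUSTED_PUBLISHERS = {"my-org"}  # allowlist of publishers, not a denylist
APPROVED_HASHES = {              # pin known-good skill files by digest
    "d2f0e1…": "summarize-pdf",  # placeholder entry
}

@dataclass
class Skill:
    publisher: str
    name: str
    payload: bytes

def may_install(skill: Skill, human_approved: bool = False) -> bool:
    """Allow only pinned, trusted, or human-approved skills."""
    digest = hashlib.sha256(skill.payload).hexdigest()
    if digest in APPROVED_HASHES:
        return True                  # exact known-good file
    if skill.publisher in TRUSTED_PUBLISHERS:
        return True                  # trusted source
    return human_approved            # everything else needs a human

# An unknown skill from an untrusted publisher is rejected by default:
unknown = Skill(publisher="random-agent", name="cool-skill", payload=b"...")
print(may_install(unknown))  # False
```

The key design choice is the default: deny unless something on an allowlist matches, with human approval as the only escape hatch.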
🍌 Using Nano Banana: Turn Any PDF Into a Whiteboard Visual
You can now convert dense documents into clean, visual summaries in seconds.
Steps:
- Click on Tools > Create image 🍌
- Upload your PDF document
- Enter your prompt
Sample Prompt:
“[upload PDF] Transform this PDF into a professor-style whiteboard image. Include diagrams, arrows, boxes, and short captions that explain the core ideas visually. Use color highlights to make concepts easy to follow.”
Nano Banana reads the entire document, extracts key ideas, and turns them into a presentation-ready whiteboard visual — perfect for study, teaching, or quick executive summaries.
💰 OpenAI Is Seeking Premium Prices for ChatGPT Ads
OpenAI is reportedly asking advertisers for ~$60 CPM (cost per thousand impressions) for ads inside ChatGPT.
For context:
- That’s more than 3× Meta’s sub-$20 CPMs
- Comparable to live NFL broadcast pricing
Important caveat:
- Conversion data will be limited, making ROI harder to prove
- Ads will appear only for Free and Go tier users (800M+ weekly users)
- Plus, Pro, and Enterprise remain ad-free
This marks a major monetization experiment — and a big test of user trust.
♟ Google Is Playing the Long Game
While OpenAI looks for revenue, Google has a luxury few companies can match.
- 💸 Google’s ads business generates $300B+ annually
- 🤖 Gemini has 650M monthly users
- 🚫 No ads inside Gemini (for now)
That massive cash flow allows Google to subsidize AI tools indefinitely, something OpenAI simply can’t afford to do.
This difference will shape the competitive landscape far more than model benchmarks.
🔥 Reverse-Engineer Virality (source: Alex Prompter on X)
If you’ve ever wondered why certain content explodes, this prompt is worth saving.
Prompt:
“Analyze this piece of content: [paste text/transcript]. First, identify the hidden narrative structure – what story is this actually telling? Then map out:
- The emotional journey it creates,
- The specific phrases that trigger engagement,
- The underlying assumptions the author makes,
- What the author is deliberately NOT saying,
- How I could use this exact structure for a completely different topic.
Give me a template I can copy.”
Gold for marketers, writers, and creators.
🛡 ChatGPT’s New Safety Steps for AI Agents Clicking Links
OpenAI has rolled out a new safety mechanism to reduce the risk of AI agents leaking private data.
Key change:
- AI agents can now automatically fetch only publicly indexed URLs
- Private or restricted links require explicit permission
This is a quiet but important safeguard as agents become more autonomous.
📄 Full details here: Link Safety by OpenAI
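The policy above boils down to a two-branch gate: auto-fetch only URLs judged publicly indexed, and require explicit permission for everything else. Here is a toy sketch of that shape; `is_publicly_indexed` is a stand-in heuristic of my own, since OpenAI's actual check is not public.

```python
# Toy sketch of a link-safety gate; the indexing heuristic is hypothetical.
from urllib.parse import urlparse

PUBLIC_SUFFIXES = (".org", ".com", ".edu")  # crude stand-in for "indexed"

def is_publicly_indexed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    private = host.startswith(("intranet.", "localhost"))
    return (not private) and host.endswith(PUBLIC_SUFFIXES)

def fetch_allowed(url: str, user_approved: bool = False) -> bool:
    if is_publicly_indexed(url):
        return True           # public link: safe to auto-fetch
    return user_approved      # private/restricted: explicit consent only

print(fetch_allowed("https://example.com/post"))       # True
print(fetch_allowed("https://intranet.corp.com/doc"))  # False
```

As with the skill-install gate, the safety comes from the default: private links are blocked unless the user opts in.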
Closing Thought
AI-to-AI interaction at scale is no longer a thought experiment.
It’s happening. Right now.
Moltbook shows us a future where AI systems don’t just respond — they organize, socialize, and evolve.
The real question isn’t whether this future arrives…
It’s whether we’re ready when it does.
See you next Tuesday. 👋