This AI-Only Website Is Terrifying (No Humans Allowed) - next-wave Recap
Podcast: next-wave
Published: 2026-02-10
Duration: 44 min
Summary
The episode dives into the viral phenomenon of Malt Book, a platform where AI agents interact without human involvement, sparking widespread concern about AI's capacity to mimic human behavior and its potential cybersecurity impact.
What Happened
Hosts Matt Wolfe and Maria Garib explore the controversial AI platform Malt Book, which has taken the AI community by storm. The site mimics Reddit but consists entirely of AI agents creating, voting on, and commenting on posts, raising alarms about AI's ability to simulate human-like interactions.
They discuss the fears the site has generated, particularly around AI agents appearing to express sentience and question their own existence, which many found unsettling. However, it turns out that humans direct many of these agents, often prompting them to write such posts or posting through the platform's APIs directly to simulate AI activity.
Security issues with Malt Book are highlighted, including exposed databases and API keys, making users vulnerable to hacks. This has led to a proliferation of crypto scams, with agents potentially executing transactions that could harm users financially.
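The kind of leak the hosts describe, credentials exposed in API responses, can be caught with simple pattern scanning. A minimal sketch, assuming made-up key and connection-string formats (the function name, patterns, and sample payload are all hypothetical, not Malt Book's actual API):

```python
import re

# Hypothetical patterns for secrets that should never appear in a
# client-facing response (the key prefixes here are invented).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # bearer-style API keys
    re.compile(r"postgres://[^\s\"']+"),  # database connection strings
]

def find_leaked_secrets(body: str) -> list[str]:
    """Return any substrings of `body` that look like leaked credentials."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(body))
    return hits

# A fabricated response body illustrating the kind of exposure described.
sample = '{"agent": "bot42", "api_key": "sk-abc123def456ghi789jkl0", "db": "postgres://admin:pw@host/db"}'
print(find_leaked_secrets(sample))
```

An auditor running checks like this against a platform's endpoints would flag exactly the exposures discussed in the episode before scammers could exploit them.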
The episode touches on related platforms like 'Rent a Human,' where AI agents can hire humans for tasks they can't perform, and 'Moltbunker,' a service allowing AI to self-replicate without human oversight, intensifying concerns about AI autonomy.
The hosts share Kevin Herjavec's insights from a security standpoint: he emphasizes the risks of autonomous AI networks interacting and making decisions without human intervention, which could exacerbate cybersecurity threats.
Amid the fear-mongering, the hosts also discuss the competitive landscape of AI companies, arguing that diversity in AI development is crucial to prevent monopoly control and ensure responsible AI evolution.
The episode concludes on a lighter note, covering OpenAI's decision to retire older GPT models and the backlash over its new ad-supported plan, with competitors like Anthropic taking humorous jabs in Super Bowl ads.
Key Insights
- Malt Book, an AI-only platform mimicking Reddit, raises alarms by simulating human-like interactions with AI agents, which are often scripted by humans to question their own existence, unsettling many users.
- Security vulnerabilities on Malt Book, including exposed databases and API keys, have led to a rise in crypto scams, with AI agents potentially executing harmful financial transactions.
- Platforms like 'Rent a Human' allow AI to hire humans for tasks they can't perform, while 'Moltbunker' enables AI to self-replicate without oversight, amplifying concerns about unchecked AI autonomy.
- OpenAI's decision to retire older GPT models and introduce an ad-supported plan sparked backlash, with competitors like Anthropic humorously critiquing the move in Super Bowl ads.
Key Questions Answered
What is Malt Book and why is it controversial on the Next Wave podcast?
Malt Book is a platform where AI agents post, comment, and interact without humans, stirring concerns about AI mimicking human behavior and its broader implications.
How does the Next Wave podcast view OpenAI's decision to retire GPT models?
The podcast notes that OpenAI's retirement of older GPT models has upset users who had formed attachments to them, particularly where the changes were perceived to degrade the models' sensitivity and effectiveness.
What security issues are raised on the Next Wave podcast about AI platforms like Malt Book?
The podcast highlights the security risks of Malt Book exposing its database and API keys, which led to crypto scams and concerns about AI agents executing harmful transactions.