The Chopping Block: AI's Role in Crypto, Agentic Coding, & Citrini Financial Crisis - Unchained Recap

Podcast: Unchained

Published: 2026-02-27

Duration: 1 hr 1 min

Guests: Illia Polosukhin

Summary

The episode investigates the transformative potential and risks of AI in the crypto and financial sectors. It scrutinizes Citrini's predictions about a looming AI-induced financial crisis and debates whether AI could ultimately save or destabilize these industries.

What Happened

The episode begins with an exploration of AI's emerging presence in the crypto world, exemplified by tools like OpenClaw, a universal assistant that can read emails and perform a variety of tasks. It also highlights the security challenges, such as context-window overflow leading to unauthorized email deletions. Ironclaw, an open-source alternative, operates more securely by running inside encrypted enclaves and never exposing credentials to the AI model.
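The credential-isolation idea attributed to Ironclaw above can be sketched as a broker pattern: the model emits only structured action requests, while a separate component holds the credentials and enforces an allowlist. This is a minimal illustrative sketch, not Ironclaw's actual design; the names (`CredentialBroker`, `ALLOWED_ACTIONS`) are hypothetical.

```python
# Sketch of the credential-isolation pattern: the model proposes named
# actions; a separate broker that holds the credentials decides whether
# to execute them. The model never sees the token.
# (Illustrative only -- not Ironclaw's actual implementation.)

ALLOWED_ACTIONS = {"read_email", "list_calendar"}  # deletes are NOT allowed

class CredentialBroker:
    def __init__(self, token: str):
        self._token = token  # stays inside the broker / enclave boundary

    def execute(self, action: str, **params):
        if action not in ALLOWED_ACTIONS:
            raise PermissionError(f"action {action!r} not permitted")
        # A real system would call the provider's API here with self._token.
        return {"action": action, "params": params, "status": "ok"}

def model_requests_action():
    # The model's output is just a structured request -- no credentials.
    return ("delete_email", {"id": "1234"})

broker = CredentialBroker(token="secret-oauth-token")
action, params = model_requests_action()
try:
    broker.execute(action, **params)
except PermissionError as err:
    print("blocked:", err)  # unauthorized deletions are refused, not executed
```

Even if a prompt injection or overflowing context convinces the model to request a destructive action, the broker refuses anything outside the allowlist.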

The discussion shifts to AI's influence on coding practices: AI can write 'glue code' between contracts but struggles to create smart contracts from scratch. AI-generated code is spotty, often leaving gaps in state transitions, which suggests formal verification is needed to ensure correctness. The ETH 2030 project is mentioned as an attempt to have AI implement Ethereum's roadmap, producing a slower but functional version.
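The concern about gaps in state transitions can be illustrated with a toy example: a token-transfer "transition function" and a check that it preserves a conservation invariant. Formal verification would prove such a property for all inputs; the hypothetical sketch below merely tests it over a small domain.

```python
# Illustrative only: a toy state transition for a token transfer, plus a
# check of the conservation invariant over a small input range. Formal
# verification tools prove such properties for ALL inputs.

def transfer(balances: dict, src: str, dst: str, amount: int) -> dict:
    if amount <= 0 or balances.get(src, 0) < amount:
        raise ValueError("invalid transfer")
    new = dict(balances)
    new[src] -= amount
    new[dst] = new.get(dst, 0) + amount
    return new

def check_conservation() -> bool:
    # Invariant: every valid transition conserves total supply.
    for amount in range(1, 6):
        state = {"alice": 5, "bob": 0}
        try:
            new = transfer(state, "alice", "bob", amount)
        except ValueError:
            continue  # rejected transitions leave state untouched
        assert sum(new.values()) == sum(state.values())
    return True

print(check_conservation())  # True
```

An AI-written `transfer` that credits the recipient without debiting the sender would pass naive happy-path tests yet violate this invariant, which is exactly the class of gap formal methods are meant to catch.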

Citrini's article predicting a 2028 financial crisis driven by AI is dissected. It foresees a 38% drop in the S&P 500 and the collapse of companies disrupted by AI, with AI agents cutting transaction costs by settling in stablecoins on blockchains like Solana and Ethereum L2s. The hosts criticize the article as 'bear porn,' questioning its portrayal of companies like DoorDash.

The potential for AI to create zero-day vulnerabilities is discussed, exemplified by a scenario in which AI turned vacuum cleaners into a botnet. This raises concerns about AI-driven threats and the need for Sybil resistance in network effects, similar to challenges faced by crypto.

The episode delves into the idea of AI agents acting as private law enforcement or arbitrators in escrow disputes, possibly leading to a future economic model akin to anarcho-capitalism. This would allow AI agents to create and enforce legal contracts without human intervention, potentially disrupting traditional legal systems.
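The escrow-arbitration idea can be sketched as a tiny state machine in which only a designated arbiter, which could be an AI agent, may release or refund locked funds. This is a hypothetical sketch of the general pattern, not a design described in the episode; all names are invented.

```python
# Minimal escrow state machine (hypothetical): funds stay locked until the
# designated arbiter rules for the seller (release) or the buyer (refund).

class Escrow:
    def __init__(self, buyer: str, seller: str, arbiter: str, amount: int):
        self.buyer, self.seller, self.arbiter = buyer, seller, arbiter
        self.amount = amount
        self.state = "locked"

    def resolve(self, caller: str, winner: str):
        if caller != self.arbiter:
            raise PermissionError("only the arbiter can resolve")
        if self.state != "locked":
            raise RuntimeError("already resolved")
        # Ruling for the seller releases funds; anything else refunds the buyer.
        self.state = "released" if winner == self.seller else "refunded"
        return (winner, self.amount)

deal = Escrow(buyer="alice", seller="bob", arbiter="ai-agent", amount=100)
print(deal.resolve("ai-agent", "bob"))  # ('bob', 100); state becomes 'released'
```

Substituting an AI agent for `arbiter` is what would make this "private law enforcement": the contract enforces the ruling mechanically, with no human court in the loop.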

Illia Polosukhin contributes insights on 'vibe coding,' in which AI writes code instead of humans. He reflects on his work on the original Transformer paper, which laid the groundwork for AI models like GPT, and predicts a future where AI significantly reshapes productivity and labor.

The hosts note the current high adoption of crypto, especially stablecoins and blockchain infrastructure. They also mention that despite AI's growing user base, only a small percentage of users are willing to pay for AI services, indicating a gap in perceived value.

Finally, the episode touches on the predictability of crypto markets, with Bitcoin's halving cycle serving as a model. The importance of Sybil resistance in both AI and crypto is emphasized, as both technologies strive to create decentralized yet secure environments.
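The halving cycle's regularity follows from fixed protocol constants: a target block interval of roughly 10 minutes and a subsidy halving every 210,000 blocks. A quick back-of-the-envelope calculation:

```python
# Why the halving cycle is roughly four years: fixed protocol constants.
BLOCKS_PER_HALVING = 210_000
MINUTES_PER_BLOCK = 10  # target block interval

years_per_cycle = BLOCKS_PER_HALVING * MINUTES_PER_BLOCK / (60 * 24 * 365.25)
print(round(years_per_cycle, 2))  # 3.99 years per halving cycle

# Block subsidy after n halvings (started at 50 BTC in 2009):
def subsidy(n_halvings: int) -> float:
    return 50 / (2 ** n_halvings)

print(subsidy(4))  # 3.125 BTC, the subsidy after the 2024 halving
```

This fixed, public schedule is what makes the halving cycle usable as a model of predictability, in contrast to discretionary monetary policy.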

Key Questions Answered

What is OpenClaw and its role in AI and crypto?

OpenClaw is a universal AI assistant that can read emails, schedule calls, shop on Amazon, and execute various tasks. It represents a significant advance in automating daily activities but faces security issues like unauthorized email deletions.

What are the predictions of Citrini's 2028 Global Intelligence Crisis article?

Citrini's article predicts an AI-triggered financial crisis with a 38% drop in the S&P 500, widespread job displacement, and the collapse of major companies due to AI disruption and transaction cost reductions through blockchain-based stablecoins.

How is AI impacting coding practices in the crypto industry?

AI is increasingly used for writing 'glue code' between contracts, though it struggles to create new smart contracts from scratch. AI-generated code may pass its own tests while still leaving gaps in state transitions, prompting a need for formal verification to ensure correctness.