Anthropic Said No to the Pentagon. Now It's Blacklisted - next-wave Recap

Podcast: next-wave

Published: 2026-03-03

Duration: 1 hr 2 min

Summary

Anthropic refused to comply with the Pentagon's demands to use its AI for mass surveillance and fully autonomous weapons, leading to a significant conflict with the U.S. government. The episode explores the implications of this standoff and the potential impact on the AI landscape.

What Happened

Anthropic, the AI company known for its model Claude, is at the center of a conflict with the U.S. government after refusing to allow its AI to be used for mass surveillance of American citizens and fully autonomous weapons. This refusal has led the Pentagon to threaten to classify Anthropic as a supply chain risk, potentially blacklisting the company from all government work. The government has already begun reaching out to companies like Boeing and Lockheed Martin to assess their exposure to Anthropic.

Anthropic's stance is based on concerns about privacy and the reliability of AI in life-and-death scenarios, arguing that current AI systems are not yet reliable enough for fully autonomous weapons. The company has offered to work on R&D with the Department of Defense to improve AI reliability, but the offer has not been accepted.

The episode highlights the broader debate over who should control the use of AI in military operations: government officials or the CEOs of AI companies. Some industry leaders, like Palmer Luckey and the CEO of Palantir, argue that tech CEOs should not dictate how the military uses technology, while others support Anthropic's stance.

As the deadline looms for Anthropic to comply with Pentagon demands, the company remains firm in its decision, despite potential legal and financial repercussions. The Pentagon's actions could set a precedent for how AI companies interact with government demands in the future.

The discussion also touches on the capabilities of other AI models, noting that OpenAI's and Google's models are already integrated into government systems but lack the access to classified information that Anthropic's model had.

News broke during the episode that the U.S. government has decided to blacklist Anthropic, a move that could significantly impact the company's operations and relationships with other enterprises.

The episode concludes by exploring new developments in AI agents, including Perplexity Computer's model-agnostic agent and other tools being integrated into platforms like Notion and Cursor. These agents are becoming increasingly useful for automating tasks and managing workflows.

Key Questions Answered

What happened between Anthropic and the Pentagon on the Next Wave podcast?

Anthropic refused to allow its AI to be used for mass surveillance and fully autonomous weapons, leading the Pentagon to threaten to blacklist the company, potentially impacting its government contracts and enterprise partnerships.

Why did Anthropic refuse the Pentagon's demands?

Anthropic cited concerns about privacy and the reliability of AI in critical scenarios, stating that its AI systems are not ready for fully autonomous weapons and should not be used for mass domestic surveillance.

What is Perplexity Computer's new AI agent?

Perplexity Computer introduced a cloud-based AI agent that can use various large language models to automate tasks and manage workflows, offering a managed, model-agnostic alternative to OpenClaw.