The Great Security Update: AI ∧ Formal Methods with Kathleen Fisher of RAND & Byron Cook of AWS - Cognitive Revolution Recap
Podcast: Cognitive Revolution
Published: 2025-12-24
Duration: 1 hr 39 min
Guests: Kathleen Fisher, Byron Cook
Summary
Kathleen Fisher and Byron Cook explore how formal methods and automated reasoning can secure software systems against AI-enabled cyberattacks, offering real security guarantees. They discuss how these techniques are being used to improve the security of coding models and AWS services.
What Happened
Kathleen Fisher and Byron Cook discuss the potential of formal methods to deliver robust cybersecurity in the era of AI. Fisher, who led DARPA's High Assurance Cyber Military Systems (HACMS) program, shares insights from her work at RAND and her upcoming role at ARIA. Cook, who leads automated reasoning work at AWS, explains how formal methods are applied to distributed systems to strengthen security.
Formal methods encompass the algorithmic search for proofs, enabling reasoning about infinitely many scenarios in finite time and space. These methods can also supply reward signals for coding models, potentially enabling large language models (LLMs) to reach superhuman levels of code security. AWS employs formal methods to translate natural language policies into formal rules, improving AI agents' policy compliance.
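The core idea of "reasoning about infinitely many scenarios in finite time" can be illustrated with a toy abstract interpretation over integer intervals. This is purely an illustrative sketch, not any tool the guests named; real formal-methods tools (model checkers, proof assistants, abstract interpreters) are far more sophisticated.

```python
# Toy abstract interpretation: prove a property over ALL integers
# in finite time by computing with intervals instead of values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # may be -inf
    hi: float  # may be +inf

    def __add__(self, other: "Interval") -> "Interval":
        # Interval addition: sound over-approximation of x + y
        return Interval(self.lo + other.lo, self.hi + other.hi)

def abstract_clamp_nonneg(x: Interval) -> Interval:
    # Abstract version of the concrete operation max(x, 0)
    return Interval(max(x.lo, 0), max(x.hi, 0))

# Concrete program under analysis: y = max(x, 0) + 1, for ANY integer x.
all_ints = Interval(float("-inf"), float("inf"))
y = abstract_clamp_nonneg(all_ints) + Interval(1, 1)

# One finite computation proves y >= 1 for infinitely many inputs.
assert y.lo >= 1
print(f"verified: y in [{y.lo}, {y.hi}] for all integer x")
```

The point is the shape of the argument: instead of testing individual inputs, the analysis runs the program once over a symbolic domain that covers every input at once.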
The episode highlights how formal methods offer varying levels of security guarantees, from type safety to full functional correctness. Cook describes Amazon's focus on proving specific concerns, leading to interconnected systems of proofs. This approach helps mitigate vulnerabilities in software, a key area for reducing cyber threats.
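The spectrum of guarantees can be sketched in code: a type signature rules out only shape errors, while a functional specification pins down exact behavior. This example (my own, not from the episode) uses a runtime check to show the difference; a proof tool would establish the spec for all inputs, not just the ones tested.

```python
# Type safety vs. full functional correctness, illustrated.
from typing import List

def my_sort(xs: List[int]) -> List[int]:
    # The type signature only guarantees a list of ints comes back,
    # not that it is sorted. That is the weakest end of the spectrum.
    return sorted(xs)

def meets_spec(inp: List[int], out: List[int]) -> bool:
    """Functional spec: output is ordered and a permutation of input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

# A runtime assertion checks one input; a proof checks them all.
assert meets_spec([3, 1, 2], my_sort([3, 1, 2]))
```

Full functional correctness means proving `meets_spec` holds for every possible input, which is what tools like Isabelle establish for verified systems.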
Fisher and Cook cite examples like the seL4 separation kernel, where roughly 10,000 lines of C code required about 100,000 lines of Isabelle proof script to verify. DARPA's HACMS program used such verification to secure a helicopter against hacking attempts, showcasing the practical resilience of formally verified systems.
Generative AI tools are emerging in formal methods, helping to find proofs and iterate on specifications. AWS's automated reasoning checks aim to minimize AI hallucinations, achieving up to 99% verification accuracy. This application of AI in formal methods could revolutionize software development methodologies.
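The check-the-output pattern behind such hallucination guards can be sketched as follows: encode a policy as a formal rule, then accept a model's claim only if the rule entails it. The rule and claim format here are hypothetical toys, not AWS's actual Automated Reasoning checks API.

```python
# Sketch of validating generated claims against formal rules.
# Hypothetical policy; real systems compile customer policies
# into logic and query a reasoning engine.

RULES = {
    # Policy: meal expenses are reimbursable up to $50/day.
    "meal_limit_usd": 50,
}

def check_claim(claim: dict) -> bool:
    """Return True only if the claim is consistent with the rules."""
    if claim["type"] == "meal_expense_allowed":
        return claim["amount_usd"] <= RULES["meal_limit_usd"]
    return False  # unknown claim types are rejected, not guessed

# A model asserts "$75 dinner is reimbursable" -- the checker rejects it.
hallucinated = {"type": "meal_expense_allowed", "amount_usd": 75}
valid = {"type": "meal_expense_allowed", "amount_usd": 40}

print(check_claim(hallucinated))  # contradicts the policy
print(check_claim(valid))         # entailed by the policy
```

The design choice worth noting is the default deny: anything the rules cannot vouch for is flagged rather than passed through, which is what turns a statistical generator into a checkable one.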
The conversation touches on the challenge of writing accurate specifications, a familiar problem in large organizations. Fisher and Cook discuss the commercial marketplace's role in adopting security technologies, with consumer willingness to pay for secure infrastructure driving innovation. They envision future AI models like GPT-6 and Gemini-4 producing highly secure code, significantly reducing vulnerabilities.
Key Insights
- Formal methods enable the algorithmic search for proofs, allowing reasoning about infinite scenarios within finite time and space, which can significantly enhance cybersecurity in AI systems.
- The seL4 separation kernel, cited as an example of formal verification, required roughly 100,000 lines of Isabelle proof script to verify about 10,000 lines of C code, illustrating both the cost and the robustness of such systems.
- AWS employs formal methods to translate natural language policies into formal rules, which improves AI agents' compliance with security policies and helps mitigate vulnerabilities in distributed systems.
- Generative AI tools in formal methods are achieving up to 99% verification accuracy, which could drastically reduce AI hallucinations and revolutionize software development methodologies.