Quick Facts
- Category: AI & Machine Learning
- Published: 2026-05-01 11:11:22
The Developer's New Superpower: Spotting AI's Hidden Mistakes
A seismic shift in software testing is underway as AI-driven agents introduce non-determinism that breaks traditional methodologies, warns a top industry executive.
The Challenge
Fitz Nowlan, Vice President of AI and Architecture at SmartBear, said in a recent podcast that the core assumptions of software development are collapsing. "We are moving away from old assumptions about what code looks like and how it behaves," Nowlan stated.

The specific crisis involves testing MCP (Model Context Protocol) servers driven by large language models. These LLM agents produce different outputs for the same input, a problem known as non-determinism. "When you don't know what's inside the code because it's generated by an AI, you can't test it the old way," Nowlan explained. "You need a completely new approach."
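The failure mode is easy to demonstrate with a toy stand-in for an LLM-backed agent: the same prompt can yield different, equally valid completions, so a classic exact-match assertion is fragile by construction. A minimal Python sketch (the agent and its canned completions are invented for illustration, not SmartBear's tooling):

```python
import random

def toy_agent(prompt: str, seed=None) -> str:
    # Hypothetical stand-in for an LLM call: one prompt,
    # potentially a different completion on every invocation.
    rng = random.Random(seed)
    completions = [
        "The capital of France is Paris.",
        "Paris is the capital of France.",
        "France's capital city is Paris.",
    ]
    return rng.choice(completions)

# A traditional exact-match assertion like this is fragile:
#     assert toy_agent("capital of France?") == "The capital of France is Paris."
# Sampling the agent repeatedly shows why:
outputs = {toy_agent("capital of France?", seed=s) for s in range(50)}
print(len(outputs))  # more than one distinct output for the same input
```

All of the completions are semantically correct, which is exactly what makes string equality the wrong oracle here.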
Background
MCP servers act as bridges between AI models and external tools, becoming critical infrastructure for agentic AI systems. However, the stochastic nature of LLMs makes their behavior inherently unpredictable.
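Concretely, MCP traffic consists of JSON-RPC 2.0 messages. A tool invocation looks roughly like the request below (the tool name and arguments are hypothetical; consult the MCP specification for the exact schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "location": "Paris" }
  }
}
```

The request itself is deterministic and easy to validate; the testing difficulty is that the response is shaped by an LLM-driven pipeline and can differ from run to run.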
Traditional testing relies on known code paths and deterministic results, a paradigm that collapses when the system under test is opaque and its behavior shifts between runs. "We're essentially testing a black box that changes every time," Nowlan noted.
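One practical response is to assert invariants that must hold on every run instead of comparing against a single golden output. A minimal sketch, assuming the agent replies in JSON (the field names and thresholds here are illustrative assumptions, not part of any real product):

```python
import json

def validate_agent_output(raw: str) -> list:
    """Check structural invariants of a black-box agent reply instead of
    comparing it to one canonical string. Returns a list of violations."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    # Invariants we can assert even when the wording varies between runs
    # (the 'answer' and 'confidence' fields are hypothetical):
    if not isinstance(data.get("answer"), str) or not data["answer"].strip():
        problems.append("missing or empty 'answer' field")
    if not isinstance(data.get("confidence"), (int, float)):
        problems.append("missing numeric 'confidence' field")
    elif not 0.0 <= data["confidence"] <= 1.0:
        problems.append("'confidence' outside [0, 1]")
    return problems

print(validate_agent_output('{"answer": "Paris", "confidence": 0.93}'))  # []
print(validate_agent_output('{"answer": ""}'))  # two violations reported
```

The check tolerates any phrasing of the answer while still failing loudly on structurally broken output.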
What This Means
Nowlan argues that data locality and data construction are now more valuable than understanding source code. "When source code is easy to generate, the real asset is the data and how you construct it," he said.

This suggests a move from code-centric testing to data-centric validation. Teams will need tools that model expected data distributions and monitor outputs for anomalies, rather than focusing on code coverage or unit tests. Emerging techniques include property-based testing, statistical validation, and drift monitoring.
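Statistical validation, for example, replaces a single pass/fail assertion with a sampled pass rate gated by a threshold. A minimal sketch using a stubbed, seeded stand-in for the agent (everything here is an illustrative assumption, not SmartBear's method):

```python
import random

def stochastic_agent(prompt: str, rng: random.Random) -> str:
    # Hypothetical stand-in for an LLM call: mostly correct, occasionally not.
    return rng.choice(["Paris"] * 9 + ["Lyon"])  # ~90% "correct"

def pass_rate(predicate, n: int = 200, seed: int = 0) -> float:
    """Instead of asserting that one call succeeds, sample many calls
    and measure how often the predicate holds."""
    rng = random.Random(seed)
    hits = sum(predicate(stochastic_agent("capital of France?", rng))
               for _ in range(n))
    return hits / n

rate = pass_rate(lambda out: "Paris" in out)
print(round(rate, 2))  # close to 0.9
# Gate on a threshold rather than exact equality:
assert rate >= 0.8, "pass rate regressed below the acceptance threshold"
```

The acceptance threshold becomes a tunable quality bar, which is how non-deterministic behavior gets an actionable pass/fail signal.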
Key Implications
- Shift in QA Focus: From verifying code paths to validating data behavior.
- New Tools Needed: Frameworks that can handle uncertainty and non-determinism.
- Investment Required: Organizations must prioritize data construction and locality.
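Drift monitoring, one of the techniques named above, can start very simply: track a summary statistic of agent outputs and alert when a recent window departs from a trusted baseline. A minimal sketch with illustrative numbers (real monitors would use richer statistics, such as KS tests or embedding distances):

```python
from statistics import mean, pstdev

def drift_score(baseline: list, recent: list) -> float:
    """How many baseline standard deviations the recent mean has
    moved from the baseline mean (a crude but useful drift signal)."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return 0.0 if mean(recent) == mu else float("inf")
    return abs(mean(recent) - mu) / sigma

# Token counts of agent replies from a trusted baseline period
# versus the latest monitoring window (numbers are invented):
baseline = [42, 40, 45, 41, 44, 43, 39, 46]
recent = [60, 58, 63, 61]

score = drift_score(baseline, recent)
print(score > 3)  # True: flag the window for human review
```

A shift this large does not prove the agent is wrong, but it is exactly the kind of anomaly a data-centric QA process would surface for investigation.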
Nowlan concluded: "The era of deterministic testing is ending. We need to embrace non-determinism and build testing frameworks that can handle uncertainty."
Industry watchers say this shift could reshape development practices for safety-critical systems, autonomous agents, and compliance-heavy domains.