Ashby's Law and the AI agent problem
W. Ross Ashby published his Law of Requisite Variety in 1956. In plain English, it says that a controller must have at least as much variety in its responses as the system it's trying to control has in its disturbances.
That’s a dense sentence. Let me unpack it with an example.
If you’re driving a car, you encounter a variety of situations: curves, other cars, pedestrians, rain, construction, potholes. Your ability to handle those situations depends on your variety of responses: you can steer, brake, accelerate, signal, honk. If the road produces a disturbance you don’t have a response for (say, black ice when you’ve only ever driven on dry pavement), you lose control.
The law is simple: if the world can throw 100 different situations at you, you need at least 100 different responses. If you only have 50, you're going to fail on the situations you can't match.
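The matching logic above can be sketched as a toy set operation. This is an illustration of the variety-gap idea, not Ashby's formal statement of the law, and the driving scenario is the article's own example:

```python
# Toy illustration of a variety gap: each disturbance needs a matching
# response, and any disturbance without one is an uncontrolled outcome.
def variety_gap(disturbances: set, responses: set) -> set:
    """Return the disturbances the controller has no response for."""
    return disturbances - responses

road = {"curve", "merge", "pedestrian", "rain",
        "construction", "pothole", "black_ice"}
driver = {"curve", "merge", "pedestrian", "rain",
          "construction", "pothole"}  # never driven on ice

uncontrolled = variety_gap(road, driver)
print(uncontrolled)  # {'black_ice'}: the one situation with no response
```

The gap is exactly the set of situations where the driver, or the agent, loses control.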
This is the single most important concept in AI agent design that almost nobody is talking about.
How most agent frameworks violate this
A typical AI agent has a language model and a set of tools, where the language model provides reasoning variety and the tools provide action variety, and together they represent the agent’s total variety of response.
Now consider the environments these agents operate in: real businesses, real customers, real operational contexts with ambiguity, conflicting priorities, incomplete information, and situations that have never occurred before. The variety of disturbances is enormous. Most agent frameworks respond with a fixed set of tools, a static system prompt, and a single reasoning model. The variety gap is massive.
When an agent encounters a situation it doesn't have the variety to handle, one of two things happens. It hallucinates a response, confabulating confidence about something it doesn't actually know how to handle. Or it fails silently, producing an output that looks reasonable but misses the actual complexity of the situation.
Both failure modes come directly from insufficient variety. Ashby predicted this in 1956.
What requisite variety looks like for AI agents
Closing the variety gap requires variety on multiple dimensions.

Cognitive variety means the agent doesn't reason the same way about every problem. Our cognitive pipeline adjusts reasoning depth to the situation: simple problems get compressed processing, complex problems get multi-iteration deliberation with verification. The agent develops a range of cognitive approaches instead of defaulting to one.

Capability variety means the agent has access to enough tools to handle the range of situations it encounters. We maintain 46 registered tools, and the tool discovery module dynamically surfaces the relevant ones based on the current task. The agent doesn't need to know about all 46 at all times; it can find the right one when the situation demands it.

Model variety means different types of reasoning get different models. We route through 53 models across 7 providers: the reasoning model for complex analysis is different from the model for fast classification, and the local model for privacy-sensitive extraction is different from the cloud model for creative generation. Each model adds variety to the system's total response capacity.

Memory variety means the agent's accumulated experience expands its response repertoire over time. An agent that has handled a particular type of situation before has more variety for handling similar situations in the future, because memory doesn't just provide information. It provides response variety that didn't exist before the experience.
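The tool-discovery step can be sketched in a few lines. This is a hypothetical illustration, not the article's actual implementation: the tool names, the registry shape, and the keyword-overlap scoring are all assumptions made for the example.

```python
# Hypothetical sketch of tool discovery: the agent holds a large
# registry but only surfaces the tools relevant to the current task,
# ranked by a crude keyword-overlap score.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    keywords: frozenset

REGISTRY = [
    Tool("send_invoice", frozenset({"invoice", "billing", "customer"})),
    Tool("query_orders", frozenset({"order", "customer", "status"})),
    Tool("summarize_doc", frozenset({"document", "summary", "report"})),
    # ...in the article's system, 46 such entries
]

def discover(task: str, top_k: int = 2) -> list[str]:
    """Surface the top_k tools whose keywords overlap the task."""
    words = set(task.lower().split())
    scored = [(len(t.keywords & words), t.name) for t in REGISTRY]
    scored = [s for s in scored if s[0] > 0]  # drop irrelevant tools
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

print(discover("check the status of a customer order"))
# ['query_orders', 'send_invoice'] -- ranked by relevance to the task
```

A production system would score with embeddings rather than keyword overlap, but the shape is the same: total capability variety stays large while the variety presented to the model at any moment stays small.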
The variety management problem
Here’s where it gets interesting: you can’t just maximize variety. More tools, more models, more memory, more cognitive modes, and eventually the variety of the agent’s own internal state becomes a management problem. The cybernetics tradition Ashby founded addressed this too, under the name variety attenuation: you need enough variety to match your environment, but you also need mechanisms to manage that variety so it doesn’t overwhelm the system.

For us, that’s what governance does. Governance structures don’t reduce the agent’s variety; they organize it. Tool permissions mean the agent doesn’t have to sort through 46 tools for every task. Role definitions mean the agent considers only the actions appropriate for its position, not every possible action. Trust levels mean the agent’s autonomy is calibrated to demonstrated competence. Governance is variety management: it ensures the agent has enough variety to handle its environment without drowning in its own options.
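Governance-as-filtering can be sketched the same way. Again, a hypothetical illustration under assumed names: the roles, tools, and trust thresholds below are invented for the example, not taken from the article's system.

```python
# Hypothetical sketch of governance as variety management: permissions
# don't shrink the agent's total repertoire, they select which slice of
# it is in play for a given role and trust level.
ALL_TOOLS = {"read_docs", "query_orders", "send_invoice",
             "refund_payment", "delete_account"}

PERMISSIONS = {
    # role: (allowed tools, minimum trust level to use them all)
    "support": ({"read_docs", "query_orders"}, 1),
    "billing": ({"read_docs", "query_orders",
                 "send_invoice", "refund_payment"}, 2),
}

def available_tools(role: str, trust: int) -> set:
    """Return the slice of the repertoire this role may act with now."""
    allowed, min_trust = PERMISSIONS[role]
    if trust < min_trust:
        # low trust: read-only subset until competence is demonstrated
        return {t for t in allowed if t.startswith("read")}
    return allowed

print(available_tools("support", trust=1))  # {'read_docs', 'query_orders'}
print(available_tools("billing", trust=1))  # {'read_docs'}
```

Note that `ALL_TOOLS` never shrinks. The system keeps its full variety; governance decides how much of it is exposed at once.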
Why organizational theorists saw this before AI researchers
Ashby was a cyberneticist and a psychiatrist, not a computer scientist. His Law of Requisite Variety was about control in any regulated system, brains and organizations included, not software. And organizational theorists have been applying it for decades to explain why some companies adapt and others don’t.
A company with a rigid hierarchy and fixed processes has low variety. It handles routine situations well and breaks under novel pressure. A company with distributed decision-making and adaptive processes has high variety. It handles novelty well but risks chaos without governance.
AI agents face exactly the same trade-off. The solution isn’t maximum autonomy (chaos) or maximum control (rigidity). It’s requisite variety with governance. Enough flexibility to handle the real world, enough structure to stay coherent.
Ashby figured this out 70 years ago. The AI industry is just now running into the problems he predicted.