Cynefin isn't a consulting framework. It's an operating system.
The first time I encountered Cynefin was in a management consulting context. A facilitator put four quadrants up on a whiteboard: Simple, Complicated, Complex, Chaotic. We sorted our organizational challenges into the quadrants and discussed appropriate responses for each one. It was a useful exercise, and also a complete misuse of the framework.
What Cynefin actually says
Dave Snowden didn't create Cynefin as a categorization tool; he created it as a sense-making framework. The distinction matters. Categorization assumes you know what kind of problem you're facing before you respond. Sense-making means your response strategy IS how you figure out what kind of problem you're facing: you don't categorize first and then act, you act in order to understand. In the Clear domain (Snowden renamed Simple to Clear in later versions), cause and effect are obvious and best practices work: you sense the situation, categorize it, and respond with the known correct answer. In the Complicated domain, cause and effect are discoverable but not obvious, so you need expertise: you sense the situation, analyze it, and respond. In the Complex domain, cause and effect are only visible in retrospect; you can't analyze your way to an answer because the system is too dynamic, so you probe, sense what happens, and respond based on what emerges. In the Chaotic domain, there's no discernible cause and effect, so you act first to stabilize the situation, then sense where you are, then respond. Four domains means four fundamentally different approaches, while most AI agent architectures use only one.
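The four response sequences can be written down directly. A minimal sketch (the `Domain` enum and `RESPONSE_SEQUENCE` table are my names for illustration, not Snowden's):

```python
from enum import Enum

class Domain(Enum):
    CLEAR = "clear"
    COMPLICATED = "complicated"
    COMPLEX = "complex"
    CHAOTIC = "chaotic"

# Cynefin's response sequence for each domain, as described above.
RESPONSE_SEQUENCE = {
    Domain.CLEAR: ["sense", "categorize", "respond"],
    Domain.COMPLICATED: ["sense", "analyze", "respond"],
    Domain.COMPLEX: ["probe", "sense", "respond"],
    Domain.CHAOTIC: ["act", "sense", "respond"],
}
```

The point of tabulating it is that only the Clear row starts from a known answer; the other three rows differ in whether understanding comes before, during, or after action.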
How this maps to AI agent cognition
Here's what caught my attention when I started connecting Cynefin to agent design: most AI agents treat every problem the same way. A message comes in, the model reasons about it, the response goes out. It's the same process for a simple factual query and a complex strategic question, which is like using a hammer for every construction task when sometimes you need a hammer, sometimes you need a measuring tape, and sometimes you need to step back and redesign the whole wall. When I think about our cognitive pipeline through the Cynefin lens, the mapping becomes clear. Clear domain problems should compress to near-instant responses: the agent recognizes the pattern, retrieves the known answer, and delivers it, with the cognitive pipeline minimizing overhead for these cases. Fast in, fast out. Complicated domain problems need the full reasoning pipeline: orient to the situation, analyze it with the right tools and knowledge, verify the analysis, deliver a considered response. This is where multi-step reasoning with tool usage earns its keep. Complex domain problems need something different. The agent should probe rather than analyze: run a small experiment and surface the uncertainty rather than hiding it, saying "I'm not sure, but here's what I'd suggest trying to learn more" rather than confabulating a confident answer. Chaotic domain problems need immediate stabilization. You act first: the agent should take the most obvious helpful action immediately and then assess what happened, without deliberating when the situation needs speed.
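The mapping above amounts to a dispatcher: route each problem to a domain-appropriate strategy instead of one fixed reasoning loop. A hypothetical sketch, where every name (`handle`, `fast_path`, `full_reasoning`, `probe_and_surface`, `stabilize_first`) is illustrative rather than an existing API:

```python
def fast_path(msg: str) -> str:
    # Clear: pattern-match to a known answer, minimal overhead.
    return f"known answer for: {msg}"

def full_reasoning(msg: str) -> str:
    # Complicated: orient, analyze with tools, verify, respond.
    return f"analyzed response for: {msg}"

def probe_and_surface(msg: str) -> str:
    # Complex: propose a small experiment and surface the uncertainty.
    return f"I'm not sure; here's a probe worth trying for: {msg}"

def stabilize_first(msg: str) -> str:
    # Chaotic: act immediately, assess afterward.
    return f"immediate stabilizing action for: {msg}"

HANDLERS = {
    "clear": fast_path,
    "complicated": full_reasoning,
    "complex": probe_and_surface,
    "chaotic": stabilize_first,
}

def handle(msg: str, domain: str) -> str:
    # Dispatch on the assessed domain rather than running one loop.
    return HANDLERS[domain](msg)
```

The handlers are stubs; the architectural claim is only that the dispatch table exists at all, so the cost and shape of reasoning can differ by domain.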
Why most AI systems only work in Complicated
The entire AI reasoning wave is optimized for the Complicated domain. Step-by-step reasoning, chain-of-thought, tool usage, analysis: these are all Complicated domain techniques. They work beautifully when the problem has a discoverable right answer that can be reached through analysis. But they fail in the Complex domain, where the right answer only becomes visible after you act. An agent that tries to analyze its way through a complex problem will either confabulate an answer (the model is forced to produce output, so it invents certainty where none exists) or loop indefinitely, chasing a certainty that can't be reached through analysis. They also fail in the Clear domain, but in the opposite way: applying full reasoning to a problem with an obvious answer is wasteful, adding latency and cost without adding value. And they fail catastrophically in Chaos, because an agent that deliberates during a crisis is an agent that misses the window for effective action.
The design implication
If you accept the Cynefin framing, the implication for AI agent architecture is clear: the agent needs to determine what domain a problem falls in before deciding how to reason about it. Our orient phase does something like this, assessing the situation before committing to a reasoning strategy. Is this familiar territory? Is it analyzable? Is it genuinely uncertain? Is it urgent? I'm not going to claim we've fully implemented Cynefin in our cognitive pipeline, because we haven't. But the framework shapes how we think about what the pipeline should eventually do, and it explains why "just make the reasoning better" isn't sufficient as an architectural strategy. Better reasoning helps in the Complicated domain, does nothing for Complex problems, and is actively wasteful for Clear ones. The real architectural challenge is matching the cognitive strategy to the domain of the problem, and that's a sense-making problem, not a reasoning problem. Snowden's been writing about this for decades, and the AI industry would benefit from reading him.
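The orient-phase questions can be sketched as a crude domain assessment. This is a hypothetical illustration, not our actual pipeline: the boolean signals (`familiar`, `analyzable`, `urgent`) and the name `assess_domain` are my assumptions, and a real implementation would work from much richer evidence.

```python
def assess_domain(familiar: bool, analyzable: bool, urgent: bool) -> str:
    # Order matters: urgency trumps everything (act first in Chaos),
    # then known patterns (Clear), then analyzability (Complicated).
    if urgent:
        return "chaotic"      # act, then sense, then respond
    if familiar:
        return "clear"        # known pattern, fast path
    if analyzable:
        return "complicated"  # full reasoning pipeline
    return "complex"          # probe, sense, respond
```

Even this toy version makes the architectural point: the expensive reasoning machinery is one branch among four, not the default for every input.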