Concepts to Impact AI

Autopoiesis: the concept that explains why most AI agents feel dead

Tim Jordan · March 16, 2026 · 5 min read

There’s a quality that living things have and most AI agents don’t. I’m not talking about consciousness or sentience or any of the philosophical debates; I’m talking about something more specific and more practical. Living things maintain themselves through active processes: a cell repairs its own membrane, an organism heals a wound, a healthy organization preserves its culture while adapting to new conditions. That active process of self-creation and self-maintenance is what sustains identity over time. Humberto Maturana and Francisco Varela called it autopoiesis, literally “self-creation,” and its absence is the reason most AI agents feel like tools rather than participants.

What autopoiesis actually means

An autopoietic system is one that continuously generates and maintains itself. The system’s organization (its structure, its boundaries, its identity) is produced by the system’s own operations. It isn’t assembled by something external; it creates itself.

A living cell is autopoietic: the chemical processes inside the cell produce the membrane that bounds the cell, and the membrane in turn enables those chemical processes to continue. The cell creates the conditions for its own existence. An organization can be autopoietic too: a company’s culture produces the behaviors that reinforce the culture, and its processes produce the knowledge that improves the processes. The organization maintains itself through its own operations.

Most AI agents are not autopoietic. Someone assembles them: writes a configuration, defines tools, writes a system prompt, deploys the agent. The agent doesn’t maintain itself. If the configuration drifts, if the context becomes stale, if the tools change, the agent degrades until someone external intervenes. That makes the agent a mechanism rather than a system.
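The circular dependency is the key point: the process maintains the boundary, and the boundary enables the process. A toy simulation (my own illustration, not from Maturana and Varela) makes the contrast with a mechanism concrete:

```python
class ToyCell:
    """Autopoietic toy: its internal process repairs the very
    boundary that keeps the process running."""

    def __init__(self):
        self.membrane = 1.0  # boundary integrity; 0.0 means dissolved

    def tick(self):
        self.membrane -= 0.1                 # the environment erodes the boundary
        if self.membrane > 0.2:              # metabolism needs an intact boundary...
            self.membrane = min(1.0, self.membrane + 0.15)  # ...and repairs it


class ToyMechanism:
    """Same erosion, but repair only ever comes from outside."""

    def __init__(self):
        self.membrane = 1.0

    def tick(self):
        self.membrane -= 0.1  # no self-repair: degrades until someone intervenes


cell, tool = ToyCell(), ToyMechanism()
for _ in range(20):
    cell.tick()
    tool.tick()

# The cell holds its integrity indefinitely; the mechanism has long
# since dissolved and is waiting for external maintenance.
```

The cell isn’t smarter than the mechanism; it just closes the loop between its operations and its own persistence, which is exactly the property most deployed agents lack.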

Why this matters practically

This isn’t philosophy; it has direct architectural consequences. An agent that doesn’t maintain itself requires constant external maintenance: prompts get updated, tool lists get revised, memory gets pruned manually, context gets refreshed. Every time the environment changes, someone has to update the agent. An agent with autopoietic properties would maintain its own operational fitness: recognizing when its knowledge has gone stale and seeking to fill the gap, discovering alternatives when a tool becomes unavailable, managing its own memory quality when accumulated memory creates noise rather than signal.

I’m not claiming we’ve fully achieved this. We haven’t. But the concept shapes our architecture in specific ways. Our agents have a backstory that evolves as the organizational context evolves; they don’t rely on a static system prompt that someone has to manually update. The backstory injection module loads current organizational context every time the agent reasons, so the agent’s understanding of its environment is continuously regenerated rather than frozen at deployment.

Memory accumulation is another form of self-maintenance. Each interaction adds to what the agent knows, and over time the agent’s capabilities grow, not because someone upgrades it but because its accumulated experience expands its response repertoire. That is closer to autopoiesis than a tool upgrade.
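The backstory-injection idea can be sketched in a few lines. This is a minimal illustration, not the module’s actual interface; the function names, the JSON file, and its fields are all hypothetical. The point is only that context is re-read at every reasoning cycle instead of being baked in at deploy time:

```python
import json
from pathlib import Path


def load_org_context(path: Path) -> dict:
    # Re-read organizational context from its source of truth on
    # every call, so the agent's self-description never freezes.
    return json.loads(path.read_text())


def build_prompt(backstory_path: Path, user_message: str) -> str:
    ctx = load_org_context(backstory_path)  # fresh on each reasoning cycle
    backstory = (
        f"You are {ctx['agent_name']}, working at {ctx['org_name']}.\n"
        f"Current priorities: {', '.join(ctx['priorities'])}."
    )
    return f"{backstory}\n\nUser: {user_message}"


# Hypothetical context file for the demo.
ctx_file = Path("org_context.json")
ctx_file.write_text(json.dumps({
    "agent_name": "Ada",
    "org_name": "Acme",
    "priorities": ["Q2 launch", "hiring"],
}))
prompt = build_prompt(ctx_file, "What should I focus on?")

# When the organization changes, the very next reasoning cycle sees it,
# with no one editing a system prompt:
ctx_file.write_text(json.dumps({
    "agent_name": "Ada",
    "org_name": "Acme",
    "priorities": ["incident response"],
}))
prompt2 = build_prompt(ctx_file, "What should I focus on?")
```

A static system prompt is the `prompt` variable computed once and reused forever; the injection pattern is the difference between `prompt` and `prompt2`.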

The dead agent problem

I keep coming back to this question: why do most AI agents feel lifeless? They can be impressively capable. They generate sophisticated responses, use tools, make plans, solve problems. And they still feel like mechanisms rather than participants.

I think autopoiesis explains it. A mechanism does what it’s told. It doesn’t maintain itself, doesn’t grow, has no continuity of experience. Every conversation is either stateless (the agent starts fresh) or relies on a static history (the agent recalls but doesn’t evolve). An autopoietic agent would be different, not because it’s conscious but because it actively maintains and develops its own operational capacity: learning from experience in a way that changes how it thinks, not just what it remembers; managing its own relevance by discarding outdated knowledge and strengthening useful patterns; maintaining a continuous identity that develops over time.

The philosophical implications are interesting, but I care more about the practical ones. An agent that maintains itself requires less human intervention. An agent that grows through experience becomes more valuable over time. And an agent with genuine continuity builds the kind of relationship with its organization that a good employee builds: deepening, contextual, increasingly nuanced.
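“Discarding outdated knowledge and strengthening useful patterns” can itself be sketched as a self-maintenance pass. The scoring scheme below (fixed decay, a reinforcement bonus, a pruning floor) is my own illustration under stated assumptions, not a described mechanism from our system:

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    strength: float = 1.0


@dataclass
class MemoryStore:
    memories: list = field(default_factory=list)
    decay: float = 0.8   # per-cycle multiplier applied to every memory
    floor: float = 0.3   # below this strength, a memory is discarded

    def recall(self, query: str) -> list:
        """Naive retrieval (substring match); recalled memories are
        reinforced, so useful patterns get stronger."""
        hits = [m for m in self.memories if query in m.text]
        for m in hits:
            m.strength = min(2.0, m.strength + 0.5)
        return [m.text for m in hits]

    def maintain(self) -> None:
        """The agent's own upkeep pass: decay everything, prune noise."""
        for m in self.memories:
            m.strength *= self.decay
        self.memories = [m for m in self.memories if m.strength >= self.floor]


store = MemoryStore()
store.memories = [Memory("deploys happen on Tuesdays"),
                  Memory("old VPN instructions")]

for _ in range(6):              # six interaction cycles
    store.recall("deploys")     # the deploy fact keeps getting used
    store.maintain()            # upkeep runs without external pruning

surviving = [m.text for m in store.memories]
# The frequently used memory survives; the unused one decays below the
# floor and is discarded by the agent's own maintenance pass.
```

The design point is that `maintain()` is part of the agent’s own loop, not a cron job someone runs against it; relevance is an outcome of the agent’s operations.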

The design challenge

Building toward autopoiesis in AI agents is hard because it runs against how software is normally built. Software is designed to be deterministic: you deploy it, it does what it does, and it stays that way until someone updates it. Self-modifying software is generally considered a bug, not a feature. But organizations aren’t deterministic. They’re adaptive; they maintain themselves through ongoing activity. If we want AI agents that participate in organizations as genuine members rather than tools, they need some of that adaptive self-maintenance.

I don’t think we’re close to full autopoiesis in AI agents, but the concept points the architectural direction. Every time we build a feature that lets the agent maintain itself rather than requiring external maintenance, we move closer. Every time we let accumulated experience change how the agent reasons, rather than just what it remembers, we move closer.

Maturana and Varela weren’t thinking about AI when they developed this idea. They were thinking about what makes something alive in the functional sense, about what gives a system the quality of ongoing self-creation. Most AI agents don’t have it. I think the ones that eventually do will feel fundamentally different to work with, and we’re building toward that.
