Knowledge is Recursive

Learning isn't a feature. It's the architecture.

Tim Jordan · March 16, 2026 · 5 min read

Every AI product claims it learns. “Our AI learns from your data.” “Our system improves over time.” “Machine learning built in.” The word “learning” has become marketing wallpaper: it sounds good on a landing page but means almost nothing in practice. Most systems that claim to learn are actually systems that can be retrained: someone takes a batch of new data, runs it through a fine-tuning process, and deploys an updated model. That’s not learning; that’s maintenance. Real learning changes how you think, not just what you know.

The difference between storage and learning

I can store every email I’ve ever received; that’s not learning. I can search those emails to find something relevant; that’s still not learning. Learning is when I recognize a pattern across emails that changes how I approach a new situation. Most AI memory systems operate at the storage-and-retrieval level: they embed information, retrieve relevant chunks when queried, and surface that information in context. This is useful, but it isn’t learning. Learning requires something more: the accumulated information has to change the system’s behavior over time, not because someone rewrote the prompt or retrained the model, but because the system’s experience has altered how it processes new situations.
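To make the distinction concrete, here is a minimal sketch of a system stuck at the storage-and-retrieval level. Everything in it is hypothetical (the class name, the toy bag-of-words “embedding”); the point is what’s absent: no matter how often it is queried or how its answers are used, nothing about its future behavior ever changes.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RetrievalOnlyMemory:
    """Stores and retrieves. However often it's used, it never changes."""

    def __init__(self):
        self.chunks = []  # (text, vector) pairs

    def store(self, text: str) -> None:
        self.chunks.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list:
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = RetrievalOnlyMemory()
mem.store("the deploy failed because the config was stale")
mem.store("quarterly revenue report for Q3")
result = mem.retrieve("why did the deploy fail", k=1)
print(result)
```

This is storage plus search. It will happily surface the right chunk in context, which is genuinely useful, but its experience never alters how it processes the next query.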

What architectural learning looks like

When we say learning is the architecture, we mean the system’s design assumes learning is happening at every level, not bolted on after the fact. The memory system doesn’t just store and retrieve; it uses different types of memories to serve different learning functions. Factual memories expand what the agent knows. Procedural memories change how the agent approaches tasks. Correction memories prevent specific mistakes from recurring. Decision memories build judgment over time. Each of these memory types feeds back into the agent’s reasoning differently. When the agent encounters a situation similar to one where it was corrected, the correction memory surfaces and directly influences the reasoning. The agent doesn’t just remember the correction; it reasons differently because of it. That’s learning: the system’s past experience changing its present behavior.
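One way the four memory types above could feed into reasoning differently is to route each type into a different part of the assembled context, so a correction lands as a hard constraint rather than as one more retrieved fact. This is an illustrative sketch, not an actual API; all names are made up.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FACTUAL = "factual"        # expands what the agent knows
    PROCEDURAL = "procedural"  # changes how the agent approaches tasks
    CORRECTION = "correction"  # prevents a specific mistake from recurring
    DECISION = "decision"      # builds judgment from past choices

@dataclass
class Memory:
    kind: Kind
    text: str

def assemble_context(memories: list) -> str:
    """Route each memory type into its own section of the reasoning prompt."""
    sections = {
        Kind.FACTUAL: "Known facts",
        Kind.PROCEDURAL: "How to approach this",
        Kind.CORRECTION: "Hard constraints (past corrections)",
        Kind.DECISION: "Prior decisions and their outcomes",
    }
    lines = []
    for kind, header in sections.items():
        relevant = [m.text for m in memories if m.kind is kind]
        if relevant:
            lines.append(f"## {header}")
            lines.extend(f"- {t}" for t in relevant)
    return "\n".join(lines)

ctx = assemble_context([
    Memory(Kind.CORRECTION, "never deploy on Fridays"),
    Memory(Kind.FACTUAL, "the staging cluster runs Kubernetes 1.29"),
])
print(ctx)
```

The design choice being illustrated: a correction isn’t just another similar chunk in the context window; it arrives framed as a constraint, so it changes how the agent reasons, not just what it has in front of it.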

Why this has to be structural

You can’t bolt learning onto an architecture that wasn’t designed for it. If the memory system is just a vector database that stores and retrieves, adding “learning” means adding a separate system that somehow feeds back into the reasoning process, and that integration is always awkward, always fragile, and never quite right. When learning is the architecture, every component is designed with the assumption that its outputs might change the system’s future behavior. The cognitive pipeline knows that memories exist and should influence reasoning. The knowledge system knows that new information should integrate with existing knowledge, not just append to it. The tool system knows that usage patterns should inform future tool selection. This is the difference between a house with solar panels added to the roof and a house designed around passive solar principles: the first works, but the second works fundamentally better, because the design assumption was present from the beginning.
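As one sketch of a component designed with that assumption built in, here is a hypothetical tool system where recording an outcome is part of the interface from day one, and recorded outcomes shift future selection. The names and the simple success-rate heuristic are illustrative assumptions, not the actual implementation.

```python
from collections import defaultdict

class ToolSystem:
    """Tool selection that assumes its own usage history will inform it."""

    def __init__(self, tools: list):
        self.tools = tools
        # Start every tool with one success out of two trials (a uniform
        # prior), so a single early outcome doesn't swing the estimate.
        self.successes = defaultdict(lambda: 1)
        self.trials = defaultdict(lambda: 2)

    def select(self) -> str:
        """Pick the tool with the best observed success rate."""
        return max(self.tools, key=lambda t: self.successes[t] / self.trials[t])

    def record_outcome(self, tool: str, succeeded: bool) -> None:
        """The feedback hook: every use updates future selection."""
        self.trials[tool] += 1
        if succeeded:
            self.successes[tool] += 1

ts = ToolSystem(["grep_search", "semantic_search"])
for _ in range(3):
    ts.record_outcome("semantic_search", succeeded=True)
ts.record_outcome("grep_search", succeeded=False)
print(ts.select())
```

The point isn’t the heuristic; it’s that `record_outcome` exists at all. In a bolted-on design there is nowhere for that signal to go.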

The recursive part

Knowledge is recursive because learning about how you learn changes how you learn. When our agents process information, they don’t just add it to the knowledge base. The system observes how the agent uses the information: which retrievals were helpful and which were noise, which memories influenced the reasoning and which were ignored. Over time this observation improves the retrieval process itself. The agent learns, and the system learns how the agent learns and adjusts accordingly: retrieval gets more precise, memory weighting more accurate, context assembly more efficient. This is a slow process and the improvements are incremental, but they compound. An agent with six months of recursive learning doesn’t just know more than it did on day one. It’s better at knowing things: better at finding relevant memories and better at judging what context matters for a given task.
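A toy sketch of that recursive loop, with all names and numbers hypothetical: retrieval ranks memories by similarity times a learned helpfulness weight, and feedback about which retrievals actually influenced the reasoning nudges that weight. The retrieval process itself improves, even though no memory content changes.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AdaptiveMemory:
    def __init__(self, lr=0.3):
        self.items = {}  # id -> [text, vector, helpfulness weight]
        self.lr = lr

    def store(self, mid, text):
        self.items[mid] = [text, embed(text), 1.0]

    def retrieve(self, query, k=1):
        """Rank by similarity scaled by the learned helpfulness weight."""
        q = embed(query)
        ranked = sorted(self.items.items(),
                        key=lambda kv: cosine(q, kv[1][1]) * kv[1][2],
                        reverse=True)
        return [mid for mid, _ in ranked[:k]]

    def feedback(self, mid, helpful):
        """The recursive step: observing how a memory was used changes
        how the system retrieves next time."""
        target = 2.0 if helpful else 0.1
        self.items[mid][2] += self.lr * (target - self.items[mid][2])

mem = AdaptiveMemory()
mem.store("noise", "deploy fail deploy fail deploy")  # similar but unhelpful
mem.store("signal", "the deploy failed because the config was stale")

before = mem.retrieve("why did the deploy fail")  # raw similarity favors "noise"
for _ in range(5):
    mem.feedback("noise", helpful=False)
    mem.feedback("signal", helpful=True)
after = mem.retrieve("why did the deploy fail")   # learned weights favor "signal"
print(before, after)
```

Each individual weight update is tiny, which matches the incremental, compounding character described above: the stored memories are identical before and after, but the system has gotten better at finding the one that matters.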

Why I care about this

I’ll be honest about why this matters to me personally. I’ve spent 25 years building businesses, and the most frustrating thing about organizational knowledge is how much of it gets lost. People leave and take their context with them. Decisions get made without knowing what was tried before. The same mistakes get repeated because nobody remembered the last time. If AI agents can genuinely learn, not just store and retrieve but actually improve their understanding and judgment over time, then organizational knowledge becomes durable. It doesn’t walk out the door, it doesn’t forget, and it compounds. That’s not a feature to put on a landing page. That’s the reason to build the whole system; everything else is supporting infrastructure.
