The difference between remembering and learning
I can remember the capital of France. I can also remember the moment I realized that most AI memory systems are just expensive filing cabinets. Both are memories, but they’re categorically different: the first is a fact I stored, while the second changed how I think about a problem. The first is remembering; the second is learning. AI agents need both, and almost all of them only have the first.
What remembering looks like in AI
Remembering in AI is retrieval. Something happened in a previous conversation, the system stored it, and when a similar topic comes up, the system retrieves the relevant stored information and includes it in the context. This is what most AI memory systems do: embed the conversation, store the vectors, and pull the relevant chunks when the user asks about something related. It’s semantic search over accumulated history, and it works well for what it is. But “what it is” is a filing cabinet with a good search function. The system can find what it stored; it doesn’t think differently because of what it stored.
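The embed-store-retrieve loop above can be sketched in a few lines. This is a deliberately toy version: the bag-of-words “embedding” stands in for a learned embedding model, and the class name is illustrative, not any real product’s API.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class FilingCabinetMemory:
    """Store-and-retrieve memory: it finds what it stored, nothing more."""

    def __init__(self):
        self.entries = []  # list of (embedding, original text)

    def store(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

Everything the system “knows” sits inert in `entries` until a query happens to match it, which is exactly the filing-cabinet property: retrieval changes what is visible, never how anything is reasoned about.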
What learning looks like
Learning changes behavior. A person who remembers that they burned dinner on high heat last time will look up the same recipe and follow it more carefully; a person who learned from the experience will intuitively adjust their approach to cooking in general, not just for that one recipe. For AI agents, the parallel is this: remembering means the agent can recall that a particular approach failed before, while learning means the agent’s reasoning process adjusts so it naturally avoids similar failures in new contexts.

We built our memory system with 6 distinct memory types, and the most interesting ones are the types that don’t just store facts:

- Correction memories capture moments when the agent got something wrong and was corrected. When a similar situation arises, the correction doesn’t just surface as retrieved context (“last time you said X and were told that was wrong”); it actively influences the reasoning process. The agent’s judgment is different because of the correction.
- Procedural memories capture how to do things, not just what things are. When an agent develops an effective approach to a type of task through repeated experience, that procedural knowledge becomes part of how it handles similar tasks. It doesn’t start from scratch each time.
- Decision memories capture not just what was decided but the reasoning context around the decision. When a future situation resembles a past decision context, the agent can draw on the decision memory to inform its judgment, even if the specific details are different.
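One minimal way to make these distinctions concrete is to type the memories and let each type land in the reasoning step differently. This is a sketch under assumptions: the enum names and the `build_prompt` helper are hypothetical illustrations, not the actual six-type taxonomy described above.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MemoryType(Enum):
    # Three of the memory types discussed above; names are illustrative.
    CORRECTION = auto()   # "you got X wrong; here is the fix"
    PROCEDURAL = auto()   # "this is how to do tasks of type Y"
    DECISION = auto()     # "we chose Z, and here is why"


@dataclass
class Memory:
    type: MemoryType
    content: str


def build_prompt(task: str, memories: list[Memory]) -> str:
    """Sketch of the key difference: corrections and procedures are not
    dumped in as generic retrieved context but framed as standing rules
    and methods that shape how the agent approaches the new task."""
    corrections = [m.content for m in memories if m.type is MemoryType.CORRECTION]
    procedures = [m.content for m in memories if m.type is MemoryType.PROCEDURAL]
    decisions = [m.content for m in memories if m.type is MemoryType.DECISION]
    parts = []
    if corrections:
        parts.append("Standing corrections (apply as rules, not hints):\n- "
                     + "\n- ".join(corrections))
    if procedures:
        parts.append("Learned procedures for this task type:\n- "
                     + "\n- ".join(procedures))
    if decisions:
        parts.append("Relevant past decisions and their reasoning:\n- "
                     + "\n- ".join(decisions))
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)
```

The point of the typing is that a correction arrives as a constraint the agent must honor, not as one more paragraph of context competing for attention with everything else that was retrieved.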
Why the distinction matters architecturally
If you only need remembering, the architecture is simple: store stuff, retrieve stuff, include it in the prompt. Vector databases handle this well, and the engineering challenge is just relevance and volume management. If you need learning, the architecture gets more complex, because the memory system needs to interface with the reasoning system in a way that changes how reasoning happens, not just what information is available. Corrections need to alter judgment, procedures need to alter approach, and decision context needs to inform new decisions at a structural level. This is the difference between a context window full of retrieved text and a cognitive process that has genuinely been shaped by accumulated experience: the first adds information, the second adds capability.
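The two architectures can be contrasted side by side. Everything here is stubbed and illustrative (the fake model just echoes the first line of its prompt); the point is only the structural difference between appending retrieved text and promoting memories into the frame the reasoning runs inside.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a model call; echoes the prompt's first line so the
    # difference in framing is visible in the output.
    return "ANSWER given: " + prompt.splitlines()[0]


class Store:
    def __init__(self):
        self.items = []        # generic retrieved facts
        self.corrections = []  # corrections the agent has received

    def retrieve(self, task: str) -> str:
        # Retrieval is stubbed; a real store would rank by relevance.
        return "; ".join(self.items)


def remembering_pipeline(task: str, store: Store) -> str:
    # Architecture 1: retrieval only. Memory changes WHAT the model sees.
    return fake_llm(f"Context: {store.retrieve(task)}\nTask: {task}")


def learning_pipeline(task: str, store: Store) -> str:
    # Architecture 2: corrections are promoted to rules that reshape HOW
    # the task is framed, not appended as more context at the bottom.
    rules = "\n".join(f"RULE: {c}" for c in store.corrections)
    return fake_llm(f"{rules}\nContext: {store.retrieve(task)}\nTask: {task}")
```

In the first pipeline a correction would be one more retrieved snippet; in the second it sits above everything else as a rule, which is a crude but structural way for memory to alter judgment rather than merely inform it.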
The organizational learning parallel
Chris Argyris wrote about organizational learning in the 1970s, distinguishing between single-loop learning (detecting and correcting errors within existing frameworks) and double-loop learning (questioning and changing the frameworks themselves). Most AI memory systems are single-loop at best: they store errors and corrections within the existing reasoning framework, but the framework itself doesn’t change. Double-loop learning would mean the agent’s reasoning approach evolves based on accumulated experience: not just “I know this fact now” but “I approach this type of problem differently now because of what I’ve experienced.” We’re pursuing this, and I won’t pretend we’ve fully achieved it. The memory types we’ve built are a step in that direction, with correction memories that influence judgment, procedural memories that change approach, and decision memories that shape future reasoning. But genuine double-loop learning in AI agents is still more aspiration than reality, because the mechanisms for how accumulated experience should change reasoning frameworks are still being figured out.
The honest assessment
If I’m being straightforward about where we are: our agents remember well, and they learn a little. The memory retrieval is solid, the influence of correction memories on reasoning is real and measurable, and the procedural and decision memory types add genuine value. But the gap between current capability and true organizational learning is still significant. An agent that’s been running for 6 months is meaningfully better than one that started today, but the improvement comes more from accumulated information than from fundamentally changed reasoning.

I think closing that gap is one of the most important problems in AI: not making models smarter on benchmarks, but making systems that genuinely learn from operational experience, systems where 6 months of work produces not just a bigger knowledge base but a wiser agent. We’re not there yet, but the architecture is designed to get there, and every improvement in how memory influences reasoning brings us closer.