Knowledge is Recursive

What an AI agent should know about you after six months

Tim Jordan · March 16, 2026 · 5 min read

Here’s a thought experiment. You’ve been working with an AI agent daily for six months. It’s been part of your operations, your decision-making, your communication flow. It’s had access to every conversation, every document, every decision.

What should it know? And maybe more importantly, what shouldn’t it?

What it should know

After six months, the agent should understand your operational patterns: not just the facts of your business but its rhythm. How decisions get made. Which topics need careful analysis and which need a quick answer. The pace at which things move, and when they slow down.

It should know your preferences well enough that explicit requests become unnecessary. If you always want data presented in a particular format, the agent should know that by month two. If you hate long summaries and prefer to see the data yourself, it should have learned that from your reactions, not from a configuration setting.

It should know the narrative of your business: not a static description from the onboarding document, but the evolving story. What changed in month three. Why that decision in month four shifted the direction. How the market feedback in month five challenged assumptions from month one.

It should know what you’re worried about, not because you told it “my concerns are X” but because it has been paying attention: the questions you ask repeatedly, the topics you keep coming back to, the areas where you seek reassurance.

And it should know what worked and what didn’t. Which recommendations were adopted, which approaches failed and why, which strategies were tested and abandoned. This is the institutional memory that prevents an organization from repeating its own mistakes.

What it shouldn’t know

This is the harder question, and it’s where most AI systems don’t even try.

The agent shouldn’t know things outside its operational scope. If it’s assigned to a specific venture’s operations, it shouldn’t have accumulated knowledge about your personal life, your other ventures’ sensitive details, or information shared in confidence that wasn’t relevant to its role.

It shouldn’t treat all information as equally persistent. A frustrated comment in a stressful moment shouldn’t carry the same weight as a deliberate strategic decision. People say things they don’t mean; they change their minds; they vent. A good colleague understands which statements to remember and which to let go, and an AI agent needs the same discernment.

And it shouldn’t build a profile the user would find uncomfortable if they saw it. This is my personal test for memory governance: if you showed the user everything the agent has stored about them, would they feel understood or surveilled? The line between those two feelings is the line between good memory design and bad memory design.

The design implications

These requirements create specific architectural challenges.

Memory needs boundaries: not just access controls but scoping. An agent’s memory should be bounded by its role and its organizational context, and knowledge accumulated while serving one venture shouldn’t bleed into its work for another without explicit permission.

Memory needs weighting that evolves. A decision made six months ago should be weighted differently than a decision made yesterday, depending on whether it’s still relevant. Some memories gain importance over time (foundational decisions) while others lose it (tactical details from a completed project). The weighting needs to be dynamic, not static.

And memory needs forgetting: not necessarily data deletion, but graceful decay. Information that hasn’t been referenced or hasn’t been relevant for an extended period should fade from active retrieval. It doesn’t disappear entirely; it moves from “readily accessible” to “available if specifically requested.”
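To make these three properties concrete, here is a minimal sketch in Python. All names (MemoryStore, recall, half_life_days, and so on) are illustrative assumptions, not any real system’s API: memories are scoped to a venture, non-foundational memories decay exponentially from their last reference, and faded memories drop out of active retrieval but remain available on explicit request.

```python
import math
import time
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    scope: str                  # e.g. which venture this memory belongs to
    created: float              # unix timestamp
    foundational: bool = False  # foundational decisions don't decay
    last_referenced: float = 0.0


class MemoryStore:
    ACTIVE_THRESHOLD = 0.2  # below this weight, a memory is "faded"

    def __init__(self, half_life_days=30.0):
        self.half_life = half_life_days * 86400  # seconds
        self.items = []

    def remember(self, text, scope, foundational=False, now=None):
        now = time.time() if now is None else now
        self.items.append(Memory(text, scope, now, foundational, now))

    def weight(self, memory, now):
        # Decay is measured from the last reference, not creation,
        # so memories that keep being used stay "warm".
        if memory.foundational:
            return 1.0
        age = now - max(memory.created, memory.last_referenced)
        return math.exp(-math.log(2) * age / self.half_life)

    def recall(self, scope, now=None, include_faded=False):
        # Scoping: memories from another venture's scope never surface here.
        now = time.time() if now is None else now
        hits = [(self.weight(m, now), m) for m in self.items if m.scope == scope]
        if not include_faded:
            # Graceful decay: stale memories leave active retrieval
            # but are still reachable with include_faded=True.
            hits = [(w, m) for w, m in hits if w >= self.ACTIVE_THRESHOLD]
        return [m for _, m in sorted(hits, key=lambda pair: -pair[0])]
```

With a 30-day half-life, a tactical note untouched for 120 days has weight 2⁻⁴ ≈ 0.06 and fades out of `recall`, while a foundational decision of the same age still surfaces at full weight.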

The trust dimension

There’s a trust component here that goes beyond technical design. An agent that knows you well is an agent that could, in theory, misuse that knowledge, leak it, or apply it in contexts where it’s inappropriate. I think about this the same way I think about trust with human employees. A great executive assistant knows almost everything about their boss’s business. That knowledge makes them invaluable, but it also requires a level of trust that’s earned over time and maintained through consistent behavior.

The governance around agent memory needs to provide the same assurances that organizational roles provide for human employees: clear boundaries on how knowledge can be used, audit trails that show what was accessed and why, and the ability for the user to review and manage what the agent remembers.

We’re still working through all of this, and I don’t have clean answers for every scenario. What I do know is that the question “what should the agent know after six months?” is one of the most important design questions in AI, and almost nobody is asking it, because they’re too focused on what the agent can do right now. But what the agent knows over time is what determines whether it becomes genuinely useful or just another tool that starts fresh every day.
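As a footnote, the governance mechanics mentioned above, audit trails plus user review and removal, can also be sketched in a few lines. This is a hypothetical illustration (the names AuditedMemory, access, review, and forget are mine, not from any real system): every memory access is logged with a reason, and the user can see and delete anything the agent has stored.

```python
import time
from dataclasses import dataclass, field


@dataclass
class AuditedMemory:
    memories: dict = field(default_factory=dict)   # memory_id -> stored text
    audit_log: list = field(default_factory=list)  # (timestamp, memory_id, reason)

    def access(self, memory_id, reason, now=None):
        # Audit trail: every read records what was accessed and why.
        now = time.time() if now is None else now
        self.audit_log.append((now, memory_id, reason))
        return self.memories[memory_id]

    def review(self):
        # The user can see everything the agent has stored about them.
        return dict(self.memories)

    def forget(self, memory_id):
        # The user can remove anything they'd rather the agent not retain.
        self.memories.pop(memory_id, None)
```

The point of the sketch is the shape, not the code: reads are never silent, and the stored profile is always inspectable and editable by the person it describes.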
