Ideas, decisions, and lessons from building organizational AI.
First-person writing from the founder. What we're learning, what we got wrong, and what we think the industry is missing.
Ashby's Law and the AI agent problem
The Law of Requisite Variety says a controller must have at least as much variety as the system it controls. Most AI agent frameworks violate this. Here's why that matters.
Autopoiesis: the concept that explains why most AI agents feel dead
Autopoietic systems create and maintain themselves. Most AI agents don't. What happens when you build agents with genuine self-maintenance and identity continuity.
The compression gradient: why Kahneman was wrong about thinking
The System 1/System 2 split is a useful simplification, but cognition isn't binary. It's a continuous gradient, and that matters for how you design AI reasoning.
Why we left CrewAI for LangGraph (and what we learned)
The real trade-offs between agent frameworks, why we migrated from CrewAI to LangGraph, and what the decision taught us about picking tools for the long haul.
Cynefin isn't a consulting framework. It's an operating system.
Dave Snowden's Cynefin framework gets used in boardrooms as a categorization tool. It's actually an operating philosophy that maps perfectly to how AI agents should handle different types of problems.
Evolution, not disruption
Everyone in AI talks about disruption. We think the right frame is evolution. The difference matters, and it changes what you build.
The first agent went live in under a month. Here's what happened.
The real story of launching the first Foundry agent, what worked, what broke, and what we learned about our own assumptions.
The 5-stage cognitive pipeline: how an AI agent actually thinks
Inside the Prepare, Reason, Verify, Execute, Deliver pipeline that gives AI agents something that looks like judgment.
Four months from zero to a full operating platform
How a founder with zero AI experience built a complete agent platform in four months. What made it possible and what nearly killed it.
Why we built a governance system before we built features
Most startups ship features first and add governance later. We did it backwards, and it turned out to be the best architectural decision we made.
Why your AI agent needs a job description, not a prompt
The difference between configuring behavior through prompt engineering and defining it through organizational role. When you give an agent a job description, behavior emerges.
Learning isn't a feature. It's the architecture.
Why 'AI that learns' isn't a marketing bullet point for us. It's the fundamental design constraint that shapes every architectural decision.
The meeting that changed how we think about agent oversight
A real story about a moment when an agent's output surprised us, the governance question it forced, and why the answer wasn't more guardrails.
Model routing: how we use 15+ LLMs without losing our minds
Not every task needs the same model. How we built a routing system that matches tasks to the right LLM based on capability, cost, and context.
The non-developer advantage
I built an AI platform with zero coding experience. That wasn't a handicap. It was the advantage.
The open kitchen: why your tools should be visible
Most software hides what it can do behind menus. What happens when you make capabilities visible so users discover solutions they wouldn't have thought to ask for.
The org chart is the product
Most AI companies sell features. We think the real product is organizational capacity itself. What it means when the structure of your AI team IS the value.
Your AI agent doesn't need a personality. It needs an org chart.
Why treating AI agents as organizational entities with roles, accountability, and memory matters more than making them feel human.
Why organizational theory is the missing piece in AI
The AI industry is full of CS PhDs and ML engineers. Almost nobody is reading organizational science. That's the gap, and it's why most agent architectures feel like software, not organizations.
The problem with 'autonomous agents'
Everyone's racing to build fully autonomous agents. But autonomy without accountability is just chaos. Why governance and structure matter more than independence.
The real AI divide isn't technical. It's organizational.
The gap that's opening isn't between companies with AI and companies without it. It's between companies that use AI as tools and companies that use AI as organizational capacity.
The difference between remembering and learning
Most 'memory' in AI is just retrieval. Real learning changes how you think, not just what you can recall.
Running a venture studio on the platform you're building
The circular challenge of using Foundry to run the ventures that fund Foundry's development, and why that feedback loop is the best product strategy we've found.
What happens when a solo founder has the capacity of a 50-person team
The promise isn't 'AI replaces your employees.' It's 'AI gives you organizational capacity you couldn't afford to hire.' What that actually looks like in practice.
Your AI starts from scratch every time. That's the problem.
Most AI tools have no memory across sessions. Every interaction is day one. What happens when you build a system that actually remembers.
Trust isn't a setting. It's earned.
How AI agents should earn autonomy through demonstrated performance, not configuration toggles. Why this mirrors how real organizations actually delegate.
Two types of RAG: why you probably need both
Standard vector RAG and graph RAG solve fundamentally different problems. We built both, and here's why the combination matters more than either one alone.
What an AI agent should know about you after six months
A thought experiment: if your AI agent has been working with you for six months, what should it know? What shouldn't it know? How do you design for that?
What breaks first when you actually use your own product
The gap between 'works in testing' and 'works when a real venture depends on it.' An honest accounting of what surprised us.
What Toyota taught me about AI agents
The Toyota Production System's principles of continuous improvement, built-in quality, and respect for people translate directly to how AI agents should operate in an organization.
Want to talk about what we're building?
Get in touch →