Eating Our Own Dogfood

Running a venture studio on the platform you're building

Tim Jordan · March 16, 2026 · 4 min read

There’s a particular kind of pressure that comes from running real businesses on software you’re actively building. When the platform is both the product and the operational backbone, you can’t hide behind “we’ll fix that in the next sprint.” The venture needs it to work now.

This wasn’t accidental. We designed it this way.

The circular dependency

Foundry is an AI agent platform. We also run a venture studio. The ventures run on Foundry, and revenue from the ventures funds Foundry’s development. Every improvement to the platform directly benefits the ventures, and every operational pain point the ventures discover directly shapes what we build next.

On paper this sounds clean. In practice it creates a tension I wasn’t fully prepared for: the platform team wants to build the best possible architecture, the ventures want things to work reliably right now, and those aren’t always the same priority.

Why this beats traditional product development

Most product teams build based on imagined user needs. They interview users, create personas, write user stories, and then build features they think will matter. Feedback comes weeks or months later, after the feature ships and users either adopt it or don’t.

We skip all of that. Our users are us. When the memory retrieval module returns irrelevant results, we feel it immediately, because one of our ventures just got a bad recommendation. When the model routing system overspends on a task a cheaper model could have handled, we feel it in the budget report.

The feedback loop is measured in hours, not months. We deploy a change, the ventures run on it the next day, and if it made things better we know by afternoon. If it made things worse, we know even faster.

What this forces you to build

Dogfooding at this level forces two things that are easy to skip when your users are external.

The first is reliability infrastructure, and this one’s straightforward: when your own revenue depends on the platform staying up, you invest in monitoring, alerting, health checks, and failover in ways that feel urgent rather than aspirational. We didn’t plan to build that first. We built it because one of the ventures had a bad day and we couldn’t afford another one. Now we have health checks running every five minutes and automated Slack alerts for anything that degrades.
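The shape of that check-and-alert loop is simple. Here’s a minimal sketch in Python, assuming hypothetical endpoint URLs and a Slack incoming webhook; the service names, URLs, and webhook are illustrative, not our actual configuration:

```python
import json
import urllib.request

# Hypothetical health endpoints -- real names and URLs are assumptions.
HEALTH_ENDPOINTS = {
    "memory-retrieval": "https://foundry.example/health/memory",
    "model-routing": "https://foundry.example/health/routing",
}
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder


def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def degraded(results: dict[str, bool]) -> list[str]:
    """Names of services that failed their last probe, sorted for stable output."""
    return sorted(name for name, ok in results.items() if not ok)


def alert(names: list[str]) -> None:
    """Post a degradation alert to Slack via an incoming webhook."""
    payload = json.dumps({"text": f"Degraded: {', '.join(names)}"}).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


def run_once() -> list[str]:
    """One pass of the loop: probe everything, alert on anything degraded."""
    results = {name: probe(url) for name, url in HEALTH_ENDPOINTS.items()}
    bad = degraded(results)
    if bad:
        alert(bad)
    return bad
```

Scheduling `run_once` every five minutes is left to cron or whatever scheduler you already run; the point is that the whole mechanism is a page of code, which is why there’s no excuse to skip it.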

The second is honest feedback mechanisms, and this one cuts deeper. When a venture encounters friction, there’s no support ticket sitting in a queue. The friction gets surfaced directly to the team building the platform, because the team and the user are the same people. There’s no filtering, no product manager deciding whether the feedback is “representative.” It’s raw and immediate.

The tension that never goes away

I want to be honest about the downside, because it’s real. The tension between “build it right” and “make it work now” is persistent. Some weeks the ventures need a quick fix that introduces technical debt; some weeks the platform needs a refactor that temporarily disrupts the ventures.

We haven’t solved this tension, and I’m not sure it’s solvable. What we’ve done is make it visible: when we make a trade-off we document it, and when we take on debt we track it. The decision log now holds over 80 locked decisions, many of them born from moments where a venture’s operational need collided with the platform’s architectural direction.
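A decision-log entry doesn’t need much structure to be useful. A minimal sketch, with field names that are my assumptions rather than our actual schema:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Decision:
    """One entry in the decision log. Fields are illustrative."""
    id: int
    title: str
    tradeoff: str           # what we gave up and why
    debt_incurred: bool     # did this add technical debt we must track?
    locked: bool = False    # locked decisions are binding until revisited
    decided_on: date = field(default_factory=date.today)


def locked_decisions(log: list[Decision]) -> list[Decision]:
    """The subset of decisions that are locked and therefore binding."""
    return [d for d in log if d.locked]


def open_debt(log: list[Decision]) -> list[Decision]:
    """Decisions that incurred debt we still owe -- the honest part of the log."""
    return [d for d in log if d.debt_incurred]
```

The value isn’t the data structure; it’s that the trade-off field forces whoever writes the entry to say out loud what the quick fix cost.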

The alternative, building a product for imagined users and hoping the market validates your assumptions, seems riskier to me. At least when you’re eating your own dogfood, you know exactly what it tastes like.

What surprised me

I expected the technical feedback to be the most valuable part of this arrangement, and it is valuable. But what surprised me is how much it shaped our understanding of what AI agents actually need in an organizational context.

When you’re just building a platform, you think about features and capabilities. When you’re running ventures on that platform, you start thinking about things like “why did the agent lose context from last week’s conversation?” or “how do we make sure the agent’s recommendations stay consistent with decisions we made two months ago?” Those aren’t feature requests; they’re organizational design questions. The ventures forced us to think about AI agents as organizational participants, not just software. That’s the thesis of the whole company, but we wouldn’t have internalized it as deeply if we weren’t living it every day.

Running ventures on your own platform isn’t comfortable, but comfortable was never the point.
