Your documentation is a model, not a wiki page
Why structured architecture docs are the most undervalued asset in engineering, and how AI is about to change that
I released a new Miro integration for EventCatalog this week. You can drag your documented architecture onto a Miro board, design on top of it, and export the result back as documentation. The round-trip closes with AI writing the docs from the export.
But the feature itself isn't what I want to talk about. What I want to talk about is what it made me realize about documentation, and where I think this whole space is heading.
Documentation is not static content
Most people think of docs as something you write and put on a shelf. A wiki page. A README. Something that exists to be read. And honestly, for most of the history of software, that's been accurate. You write it, someone reads it, it goes stale, you rewrite it. Repeat.
But I think there's a shift happening, and I don't think enough people see it yet.
Part of it is the docs-as-code movement. More teams are writing documentation in markdown, storing it in git, versioning it alongside their source code, and reviewing it in pull requests. That alone changes things. Your docs become easier to maintain, easier to reason about, and easier to keep in sync with what's actually running. You get history, diffs, branch comparisons, code review. All the things we take for granted with source code, applied to documentation.
But docs-as-code also opens a door that I think most people walk past without noticing.
When you document your architecture properly, with services, events, schemas, relationships, owners, versions, stored as structured files in a repo, you're not writing a wiki page. You're building a model. A structured, queryable, connected model of how your system actually works.
That's a fundamentally different thing. A wiki page is for humans to read. A model is for humans and machines to work with.
The hidden value of structured documentation
Think about what you actually capture when you document an event-driven architecture well:
Which services exist and who owns them
What messages flow between them
The schemas those messages carry, down to individual fields
Which services produce and consume which events
How domains relate to each other
Version history and how things have changed over time
That's not content. That's a knowledge graph. And it's one that most organizations already have pieces of, scattered across Confluence pages, Miro boards, Slack threads, and the heads of senior engineers who haven't taken a holiday in two years.
The problem has never been that this knowledge doesn't exist. The problem is that it's trapped in formats that can't be used for anything other than reading.
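To make the "knowledge graph" idea concrete, here is a minimal sketch of that same information held as a structured, queryable model rather than prose. All the names here (Service, Event, the owners, the fields) are illustrative, not EventCatalog's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    version: str
    fields: dict  # field name -> type

@dataclass
class Service:
    name: str
    owner: str
    produces: list = field(default_factory=list)
    consumes: list = field(default_factory=list)

orders = Service("Orders", owner="team-commerce", produces=["OrderPlaced"])
shipping = Service("Shipping", owner="team-logistics", consumes=["OrderPlaced"])
order_placed = Event("OrderPlaced", "1.0.0", {"orderId": "string", "total": "number"})

# Because this is a model, not a page, you can query it directly:
services = [orders, shipping]
consumers = [s.name for s in services if "OrderPlaced" in s.consumes]
print(consumers)  # ['Shipping']
```

The same question asked of a wiki page ("who consumes OrderPlaced?") requires a human to read and interpret; asked of the model, it is a one-line query.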
Models unlock things wikis can't
Once your docs are a model, interesting things start happening.
You can detect when your architecture changes. We built architecture change detection in EventCatalog and it works because the docs aren't just prose, they're structured data that can be diffed across git branches. You can't diff a Confluence page to find out that a service quietly started consuming an event it didn't before.
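The diffing idea can be sketched in a few lines. Assume each branch's docs parse into a simple map of service name to the set of events it consumes (the snapshot shape is illustrative, not how EventCatalog stores things):

```python
def new_consumptions(base, head):
    """Return (service, event) pairs that head consumes but base did not.

    base, head: dict mapping service name -> set of consumed event names,
    e.g. parsed from structured docs on two git branches.
    """
    changes = []
    for service, events in head.items():
        added = events - base.get(service, set())
        changes.extend((service, e) for e in sorted(added))
    return changes

base = {"Shipping": {"OrderPlaced"}}
head = {"Shipping": {"OrderPlaced", "PaymentFailed"}}  # quietly added a dependency

print(new_consumptions(base, head))  # [('Shipping', 'PaymentFailed')]
```

The point is that the diff is computed over data, so the "quiet" new dependency surfaces automatically in review instead of being buried in prose.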
You can trace fields across schemas. We shipped a schema fields explorer that indexes every field from every message schema in your architecture. It finds type conflicts, shows which events share a field, and lets you trace data lineage across services. You can't do that if your schemas live in a Google Doc.
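A field index like that is straightforward once schemas are structured data. A minimal sketch, with illustrative schemas (not the explorer's real implementation):

```python
from collections import defaultdict

# Every message schema, as field name -> declared type.
schemas = {
    "OrderPlaced":  {"orderId": "string", "total": "number"},
    "OrderShipped": {"orderId": "string", "total": "string"},  # type conflict
}

# Invert: field name -> {event: type}, so each field can be traced.
index = defaultdict(dict)
for event, fields in schemas.items():
    for name, ftype in fields.items():
        index[name][event] = ftype

# A field indexed with more than one distinct type is a conflict.
conflicts = {f: uses for f, uses in index.items() if len(set(uses.values())) > 1}
print(conflicts)
# {'total': {'OrderPlaced': 'number', 'OrderShipped': 'string'}}
```

The same inverted index also answers "which events carry orderId?" for free, which is the lineage question.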
You can pull your documented architecture into a design tool, model changes collaboratively, and push those changes back as documentation. That's what the Miro integration does. The round-trip only works because the docs are structured enough to export, transform, and reimport.
None of this is possible with static content. All of it is possible with models.
Now add AI to this
Here's where I think it gets really interesting, and where I see enormous hidden value that most people haven't connected the dots on yet.

If your documentation is a structured model, AI can do far more than summarize it. AI can reason about it.
We've been building this into EventCatalog in a few ways. EventCatalog Skills are structured instructions that teach AI coding agents how to read, query, and modify your catalog. And the EventCatalog MCP Server exposes your entire architecture model to any MCP-compatible client, so tools like Cursor, Windsurf, and Claude Desktop can query your services, events, schemas, and relationships directly. When an AI agent has access to your architecture model, it can answer questions like:
"What would break if I changed the schema of the OrderPlaced event?"
"Which teams would be affected if we decomposed the Payment service?"
"Show me every service that touches customer PII"
"Generate the documentation for this new service based on its AsyncAPI spec"
These aren't search queries. They're reasoning tasks that require understanding relationships, dependencies, and impact. A wiki gives you text to search through. A model gives AI something to think with.
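The first question above is essentially a graph traversal over the model. A sketch of what an agent could compute, given producer/consumer edges (the graph here is illustrative; a real agent would fetch it through something like an MCP server rather than hardcoding it):

```python
# event -> services that consume it
consumes = {
    "OrderPlaced": ["Shipping", "Billing"],
    "InvoiceCreated": ["Notifications"],
}
# service -> events it produces
produces = {
    "Billing": ["InvoiceCreated"],
}

def impact_of(event):
    """Every service reachable downstream of a change to this event."""
    affected, frontier = set(), [event]
    while frontier:
        ev = frontier.pop()
        for svc in consumes.get(ev, []):
            if svc not in affected:
                affected.add(svc)
                frontier.extend(produces.get(svc, []))  # follow what it emits
    return affected

print(sorted(impact_of("OrderPlaced")))  # ['Billing', 'Notifications', 'Shipping']
```

Note that Notifications never touches OrderPlaced directly; it shows up because Billing reacts to it by emitting InvoiceCreated. That transitive reach is exactly what a text search over a wiki cannot give you.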
MCP and the documentation-as-interface future
This connects to something bigger. With MCP (Model Context Protocol) and similar standards, your documentation doesn't just sit there waiting for someone to open a browser tab. It becomes an interface that AI agents can connect to, query, and act on.
This isn't hypothetical. We already ship an EventCatalog MCP Server that does exactly this. Connect it to Cursor, Windsurf, or Claude Desktop and your AI coding assistant can check the catalog before generating code to make sure it's using the right event schemas. It can look up who owns a service, what events it produces, what the schema looks like. A platform team's chatbot could answer "who owns this event?" without anyone needing to know where to look. A CI pipeline could query the catalog to validate that a new service's contracts match what downstream consumers expect.
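The CI check at the end of that paragraph can be sketched too. Assume the pipeline can pull, from the catalog, which fields each documented consumer depends on, and compare them against a service's proposed schema (all names and shapes here are hypothetical):

```python
def breaking_changes(schema_fields, required_by_consumers):
    """Map each consumer to the fields it needs that the new schema dropped.

    schema_fields: set of field names in the proposed schema.
    required_by_consumers: dict of consumer service -> set of required fields.
    """
    return {svc: req - schema_fields
            for svc, req in required_by_consumers.items()
            if req - schema_fields}

required = {
    "Shipping": {"orderId", "address"},
    "Billing":  {"orderId", "total"},
}
proposed = {"orderId", "total"}  # 'address' was dropped

print(breaking_changes(proposed, required))  # {'Shipping': {'address'}}
```

A non-empty result fails the build before the breaking change ships, which is the governance case described above made mechanical.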
Your docs become a live API for your architecture knowledge. Not a page someone reads, but a service that other tools and agents consume.
I think this is where the enormous, under-appreciated value of good documentation lives. The ROI of writing docs has always been hard to justify because the return was indirect: fewer questions in Slack, faster onboarding, less tribal knowledge. But when your docs are a model that machines can query, the return becomes direct and measurable. AI agents that write better code because they understand your architecture. Design tools that start from reality instead of a blank canvas. Governance systems that catch breaking changes before they ship.
The full circle
Here's what I keep coming back to.
People don't like writing documentation. That's not going to change. But AI is getting very good at generating documentation from structured inputs: specs, code, design sessions, conversations.
And once that documentation exists as a structured model, AI can use it to do useful work. Which produces more structured output. Which becomes documentation. Which feeds back into AI.
The circle is: humans design, AI documents, models enable, AI reasons, humans design better.
That's what the Miro round-trip showed me. You design in Miro against your real architecture. You export. AI writes the docs. The docs become the model. The model feeds the next design session. Nobody had to open a text editor and write markdown. The documentation happened as a side effect of doing real work.
I think we're just at the beginning of this. The tooling is early. The patterns are still forming. But I genuinely believe that structured documentation, the kind where you model your architecture rather than just describe it, is going to become one of the most valuable assets an engineering organization can have. Not because people will read it more. Because machines will use it in ways we're only starting to imagine.
If you're documenting your architecture today, even if it feels like thankless work, you might be building something far more valuable than you realize. The docs you write today could be the interface your AI tools reason about tomorrow.
Something to think about.

