Nadege Pepin — Content Systems Engineer

Description vs. understanding

Technical content is not description. Description tells you what exists. Technical content builds the mental model that lets you work with it. Those are wildly different jobs, and confusing them is how documentation ends up verbose and useless.

The measure of technical content isn't coverage. It's whether a user or a model can act after reading it. Can they deploy the system? Diagnose the failure? Understand why the architecture is shaped the way it is? If not, the content didn't do its job, regardless of how accurate it is.


Structure is an argument

Every organizational decision in a document is a claim about how the reader thinks. A four-category taxonomy built around how operators encounter a problem is a different argument than one built around how an API is organized. A diagram layered by Kubernetes primitives is a different argument than one layered by feature boundaries. These choices determine whether the content transfers understanding or just transfers information.

Good information architecture doesn't announce itself. When it works, the reader moves through the content as if the structure were obvious — because it maps cleanly onto the mental model they're building. Getting there requires understanding the subject deeply enough to know what that model should be, and understanding the reader well enough to know where they're starting from.


Depth is not optional

Surface-level familiarity produces surface-level content. You can describe an API without understanding it. You cannot build the mental model that makes it usable without going deep into the failure modes, the edge cases, the architectural decisions that explain why things work the way they do.

This is why technical writing done well is an engineering discipline. The work is not translating what engineers know into simpler words. The work is acquiring the understanding independently, finding the structure that makes it transferable, and building the artifact that closes the gap between what the system does and what the reader needs to know to use it.


Be the first user

Embed early in engineering. Be the users' advocate. Highlight what's broken, illogical, cumbersome. Push to get the change in — reshape the API, fix the UI, surface missing integration paths. The writer is the first person who has to make whole features work end-to-end. I have been that writer. It is the only version of this job worth doing.


Documentation decay is technical debt

Pages drift from reality silently. No one files a ticket that says "this is now 60% accurate." New features always win. Maintenance is invisible work with few metrics and no launch. Left unmanaged, it compounds — it erodes user trust faster than any missing feature, and it pollutes a model's knowledge.

A feature ships, and the question follows immediately: what does this mean for the existing architecture? Sometimes an entire section needs to be rewritten. Treating docs as a living system is the discipline, and it is a hard one.

AI helps surface likely impact areas and patterns in user feedback and support tickets. But deep product knowledge is not optional — a model is only as good as the person using it. The gap between what shipped and what's documented is usually invisible until it isn't. Pairing the tooling with that knowledge is the only honest answer to scale.
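The cheapest version of that surfacing can be sketched mechanically. A minimal sketch, assuming markdown docs and a known list of identifiers a release touched (the function name, file layout, and symbol set below are illustrative, not a real pipeline): cross-reference the changed symbols against the doc set and flag every page that mentions one.

```python
import re
from pathlib import Path

def flag_stale_pages(doc_dir: str, changed_symbols: set[str]) -> dict[str, set[str]]:
    """Map each doc page to the changed symbols it mentions.

    Hypothetical helper: assumes docs are *.md files under doc_dir and
    that changed_symbols came from a release diff or changelog.
    """
    hits: dict[str, set[str]] = {}
    # One alternation over all symbols, escaped so names like get() match literally.
    pattern = re.compile("|".join(re.escape(s) for s in sorted(changed_symbols)))
    for page in Path(doc_dir).rglob("*.md"):
        found = set(pattern.findall(page.read_text(encoding="utf-8")))
        if found:
            hits[str(page)] = found
    return hits
```

This only narrows the search; a page that never names the symbol can still be wrong about the architecture, which is where the product knowledge comes back in.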


The future may need very few of us. But the ones it needs had better understand the product cold.

That future is now. LLMs are trained on content. RAG pipelines retrieve it. Agents execute against it. The quality of the structure — the clarity of scope, the precision of definitions, the logic of the sequence — determines how well users and machines alike grasp the content. Time-to-understanding was always the right metric. Now it applies to every reader in the system.