Every AI session starts from zero. The context window resets. The learnings from yesterday's session evaporate. You explain the same conventions, hit the same edge cases, make the same corrections. Again.
This is the knowledge problem of AI-assisted development.
We'd been using AI coding assistants for months. The productivity gains were real — faster prototyping, less boilerplate, better autocomplete. But something was wrong.
Component #1 was sharp. Component #10 was still pretty good. By component #30, quality had drifted. The same mistakes kept appearing. Conventions we'd established were forgotten. Patterns we'd refined were ignored.
We were teaching the AI the same lessons over and over, in session after session, with no memory carrying forward. Every context window reset was a clean slate — and not in a good way.
"The best time to document a lesson is the moment you learn it. The second best time is never — because you won't remember."
What Doesn't Work
We tried the obvious solutions. None of them stuck.
Documentation wikis
Written once, rarely updated, never read at the moment of need. The knowledge exists but isn't in the context window when it matters.
Post-mortems
Only happen after disasters. By definition, they miss the small lessons — the ones that compound into quality over time.
Style guides
Static documents that show what good looks like but don't capture why decisions were made or what went wrong before.
Iterative loops
Self-improvement cycles that refine output within a session. Powerful for the task at hand — but the refinements vanish when the session ends.
The pattern was clear: knowledge that lives outside the context window might as well not exist. And knowledge that only lives inside the current session is gone tomorrow.
The System
We built something different. A knowledge system designed for AI-assisted development, where context windows are finite and sessions are stateless.
The core idea: after every significant task, ask one question.
"What rule would have prevented this, or what pattern should we repeat?"
If the answer is useful, document it. Not in a wiki. Not in a post-mortem months later. Right now, in a format that gets loaded into every future session.
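In practice, "right now" can be as lightweight as appending to a file before you close the task. A minimal sketch, assuming a single markdown file per tier (the path and helper name are illustrative, not our exact tooling):

```python
from datetime import date
from pathlib import Path

GLOBAL_FILE = Path("learnings/global.md")  # assumed location of the global knowledge file

def capture_learning(entry: str, file: Path = GLOBAL_FILE) -> None:
    """Append a learning entry the moment it is learned, with a capture date."""
    file.parent.mkdir(parents=True, exist_ok=True)
    with file.open("a", encoding="utf-8") as f:
        f.write(f"\n<!-- captured {date.today().isoformat()} -->\n{entry.strip()}\n")

# Call it before closing the task, not "later":
capture_learning("## Builds: Never bump the SDK version mid-sprint ...")
```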
Two Tiers
Global Learnings
Cross-cutting knowledge that applies everywhere. Loaded into every session automatically.
- Platform constraints
- Dependency gotchas
- Architectural patterns
- Team conventions
Feature Learnings
Specific to one feature or flow. Loaded only when working in that area.
- Design decisions
- Edge cases discovered
- Integration specifics
- Business logic rules
Global learnings are always present — they're the team's accumulated wisdom. Feature learnings are contextual — they appear when relevant and stay out of the way otherwise.
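To make the two tiers concrete, here's a minimal loading sketch. It assumes learnings live as markdown files under a learnings/ directory with global/ and features/<name>/ subfolders; the layout and function name are assumptions for illustration, not a prescribed structure:

```python
from pathlib import Path

LEARNINGS_DIR = Path("learnings")  # assumed layout: global/*.md, features/<name>/*.md

def load_learnings(feature: str | None = None) -> str:
    """Assemble the knowledge block that gets loaded into a session's context."""
    sections = []
    # Global learnings: always present, in every session.
    for path in sorted((LEARNINGS_DIR / "global").glob("*.md")):
        sections.append(path.read_text(encoding="utf-8"))
    # Feature learnings: only when working in that area.
    if feature:
        for path in sorted((LEARNINGS_DIR / "features" / feature).glob("*.md")):
            sections.append(path.read_text(encoding="utf-8"))
    return "\n\n".join(sections)

# Example: starting a session on the checkout flow.
context = load_learnings(feature="checkout")
```

How the assembled text reaches the AI depends on your tooling; system prompts, project instruction files, and agent configs all work, as long as the loading happens automatically.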
The Format
Learnings need structure. Prose documentation is hard to scan and harder for AI to apply. We use a consistent format:
## [Category]: [Specific Rule]
**Context**: What situation revealed this?
**Rule**: What should always/never happen?
**Why**: What breaks if ignored?
Example:
- ❌ Bad: [what not to do]
- ✅ Good: [what to do instead]

The format is deliberate. Context explains when it applies. Rule states the action clearly. Why prevents cargo-culting*. Examples make it concrete and AI-applicable.
*Cargo-culting: copying practices without understanding why they work. The term comes from post-WWII South Pacific, where islanders built fake runways and wooden control towers — mimicking military bases — hoping cargo planes would return. They replicated the appearance without understanding the mechanism. In software: following a rule because "that's how it's done" without knowing what breaks if you don't.
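To show the format in action, here's a hypothetical filled-in entry; the rule itself is invented for illustration, not pulled from our actual knowledge base:

```markdown
## Dates: Store timestamps in UTC

**Context**: A scheduling bug appeared when local-time timestamps crossed a DST boundary.
**Rule**: Persist timestamps in UTC; convert to local time only at the display layer.
**Why**: Mixed-timezone data makes ordering unreliable, and the bug only surfaces twice a year.

Example:
- ❌ Bad: `created_at = datetime.now()`
- ✅ Good: `created_at = datetime.now(timezone.utc)`
```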
Mistakes become rules. Rules prevent mistakes. The system gets smarter with every task.
— The flywheel
The Discipline
The system only works with discipline. Not "document learnings when you remember" but "document learnings after every significant task."
- Bug fixed → "What rule prevents this?"
- Rollback → "What constraint to document?"
- Feedback received → "What convention to capture?"
- Pattern discovered → "Will this help next time?"
Equally important is what not to capture:

- Obvious things AI already knows
- One-time fixes unlikely to recur
- Speculative rules not yet validated
- Things that change frequently
There's also pruning. Knowledge bases need maintenance. If a rule hasn't been useful in three months, question whether it belongs. Dead rules are noise.
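Pruning is easier when entries carry a bit of metadata. One possible approach, assuming each entry keeps an optional last-applied: YYYY-MM-DD marker (that convention is an assumption, not part of the format above), is a small script that flags candidates for review:

```python
import re
from datetime import date, timedelta
from pathlib import Path

STALE_AFTER = timedelta(days=90)

def find_stale_entries(learnings_dir: Path = Path("learnings")) -> list[tuple[Path, str]]:
    """Flag learning files whose 'last-applied: YYYY-MM-DD' marker is older than the cutoff."""
    stale = []
    for path in learnings_dir.rglob("*.md"):
        match = re.search(r"last-applied:\s*(\d{4}-\d{2}-\d{2})", path.read_text(encoding="utf-8"))
        if match and date.today() - date.fromisoformat(match.group(1)) > STALE_AFTER:
            stale.append((path, match.group(1)))
    return stale

# "Stale" means "question it", not "delete it"; review candidates by hand.
for path, last_applied in find_stale_entries():
    print(f"{path}: last applied {last_applied}")
```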
The Results
After three months with this system, the difference was measurable. New sessions started with accumulated wisdom instead of a blank slate. Edge cases we'd hit once were documented and avoided. Conventions stuck because they were in the context, not in someone's memory.
But the real result was qualitative. The AI felt like a team member who remembered things. Who learned from mistakes. Who got better over time. Because in a sense, it did — through us.
The Flywheel
More documented learnings
↓
Better AI outputs (learnings in context)
↓
Fewer bugs and corrections
↓
More time for new features
↓
More opportunities to learn
↓
More documented learnings
This is compound interest for knowledge. Each learning makes the next task slightly easier. Over weeks and months, "slightly easier" becomes "dramatically better."
The Lesson
AI assistants are stateless. That's not a bug to work around — it's a constraint to design for. The question isn't "how do we make the AI remember?" It's "how do we make remembering part of the system?"
The answer: capture knowledge in a format that fits the context window, load it automatically, and maintain it actively. Make the AI's "memory" explicit and external — a knowledge base that grows with every task.
Every unit of work should make the next unit easier. That's the compounding effect.