LLM Wiki Guide
A clear guide to the Karpathy Wiki pattern: what an LLM wiki is, what the model owns, and why it compounds better than ad hoc note piles.
Why this page matters
The central move is simple: stop treating the model as a one-shot retriever and start treating it as the maintainer of a living wiki.
What the pattern changes
A typical document workflow asks the model to rediscover the right fragments every time you ask a new question. The LLM wiki pattern inserts a maintained markdown layer in the middle. That layer already contains the summaries, cross-links, and evolving synthesis.
This makes the wiki the working memory of the system. You are no longer querying a pile of documents directly. You are querying a living set of pages that has already absorbed what the model learned from earlier ingestion and earlier analysis.
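The layered lookup can be sketched as a small function that assembles context from the maintained wiki pages instead of from raw documents. The flat directory of markdown pages and the keyword-matching rule below are illustrative assumptions, not part of the pattern itself:

```python
from pathlib import Path

def answer_context(wiki_dir: Path, question_terms: set[str]) -> str:
    """Build model context from maintained wiki pages, not raw sources.

    Hypothetical layout: one markdown file per page in wiki_dir.
    A real system would use embeddings or links; substring matching
    stands in for retrieval here.
    """
    relevant = []
    for page in sorted(wiki_dir.glob("*.md")):
        text = page.read_text()
        # Include a page if it mentions any term from the question.
        if any(term.lower() in text.lower() for term in question_terms):
            relevant.append(f"## {page.stem}\n{text}")
    return "\n\n".join(relevant)
```

The point of the sketch is only the indirection: the question is answered against pages that already hold earlier synthesis, so retrieval quality rides on maintenance quality.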
What the model owns
In Karpathy's framing, the model owns the wiki itself. It creates pages, edits them when new evidence arrives, strengthens or revises prior claims, and keeps the index and log up to date.
That ownership matters because the painful part of knowledge work is usually maintenance, not reading. Humans are good at choosing sources and steering the investigation. Models are good at touching ten files, reconciling wording, and filing the results without getting bored.
- Create summaries from new sources.
- Update concept pages, entity pages, and comparisons.
- Track where newer evidence changes earlier conclusions.
- File useful answers back into the wiki instead of losing them in chat.
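The filing step above can be sketched as an append-plus-log operation. The page-per-topic layout and the `log.md` changelog are assumptions for illustration, not a fixed scheme:

```python
from datetime import date
from pathlib import Path

def file_summary(wiki_dir: Path, topic: str, summary: str) -> None:
    """Append a dated summary to a topic page and record the edit in the log.

    Hypothetical layout: one markdown page per topic, plus log.md as
    the wiki's running changelog.
    """
    wiki_dir.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    # File the answer into the topic page instead of losing it in chat.
    page = wiki_dir / f"{topic}.md"
    with page.open("a") as f:
        f.write(f"\n### Update {stamp}\n{summary}\n")
    # Keep the log up to date so later passes can see what changed when.
    with (wiki_dir / "log.md").open("a") as f:
        f.write(f"- {stamp}: updated {topic}.md\n")
```

Because every useful answer lands back in a page, later questions start from the accumulated synthesis rather than from scratch.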
Where the pattern is useful
The pattern is useful anywhere knowledge accumulates over time: personal research, book reading, due diligence, internal team memory, course notes, trip planning, or any deep-dive where raw documents quickly become unmanageable.
The key fit test is whether you need compounding structure. If you only need one answer from a small corpus, plain retrieval is enough. If you want a knowledge base that gets sharper every week, the wiki pattern is more appropriate.