An LLM Wiki is a deliberately structured, human-readable, machine-consumable knowledge system designed for both people and AI agents. It preserves durable organizational knowledge (decisions, policies, product context, operating procedures, domain terms, architecture, history, and trusted references) in a format that LLMs can safely read, cite, and help maintain.
## Not Just a Wiki an LLM Can Read
A normal wiki might be understandable to a model, but an LLM Wiki is intentionally designed so an agent can answer: what is authoritative, who owns it, when it was reviewed, what is uncertain, which related pages matter, and which actions require human approval.
## Core Definition
| Property | What it means in practice |
|---|---|
| Human-readable | Markdown or equivalent prose that a new teammate can inspect without a special retrieval system. |
| Machine-consumable | Stable paths, metadata, headings, ownership, trust labels, and related links that an agent can parse. |
| Durable | Useful knowledge persists beyond a chat session, ticket thread, or single project handoff. |
| Citable | Important claims point to wiki pages, source summaries, raw evidence, or external canonical sources. |
| Reviewable | Drafts, reviewed pages, stale pages, contradictions, proposals, and deprecated pages are visibly distinguishable from one another. |
| Permission-aware | Each page tells an agent what it may read, summarize, propose, or edit, and what it must leave to humans. |
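As an illustration, the machine-consumable properties above could surface as front matter on each page. Every field name below is a hypothetical example, not an established schema:

```yaml
# Hypothetical front matter for an LLM Wiki page -- field names are
# illustrative, not a standard.
title: Deployment Policy
owner: data-platform-team          # who answers questions about this page
status: reviewed                   # e.g. draft | reviewed | stale | deprecated
sensitivity: internal
trust_level: authoritative         # vs. historical or proposed
last_reviewed: 2024-11-02
review_cycle: quarterly
related:
  - ./incident-response.md
  - ./release-checklist.md
agent_permissions:
  read: allowed
  summarize: allowed
  edit: propose-only               # edits require human approval
```

Because the fields live in a stable, parseable block rather than in prose, an agent can check staleness, ownership, and permissions without interpreting the page body.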
## What A Normal Wiki Often Lacks
A traditional wiki is usually optimized for human browsing. It may have stale pages, implicit ownership, weak source trails, uneven headings, duplicated decisions, and pages that sound authoritative even when they are only historical. An LLM Wiki fixes those failure modes by making status, provenance, ownership, uncertainty, and agent permissions first-class.
## Good Page Test
- The page states its purpose and owner near the top.
- The page includes status, sensitivity, trust level, last reviewed date, and review cycle.
- Facts, assumptions, decisions, proposals, and open questions are separated.
- Related pages are linked intentionally instead of left to search.
- Agent guidance says what an AI may do and what requires approval.
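The checklist above can be enforced mechanically. Here is a minimal sketch of a validator, assuming pages carry `---`-delimited front matter; the required field names are hypothetical examples, not a standard:

```python
import re

# Hypothetical required front-matter fields -- adapt to your wiki's conventions.
REQUIRED_FIELDS = {
    "owner", "status", "sensitivity", "trust_level",
    "last_reviewed", "review_cycle", "agent_guidance",
}

def parse_front_matter(page: str) -> dict:
    """Extract key: value pairs from a leading ----delimited block."""
    match = re.match(r"^---\n(.*?)\n---", page, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        if value:
            fields[key.strip()] = value.strip()
    return fields

def missing_fields(page: str) -> set:
    """Return the required fields the page does not declare."""
    return REQUIRED_FIELDS - parse_front_matter(page).keys()

page = """---
owner: data-platform-team
status: reviewed
sensitivity: internal
trust_level: authoritative
last_reviewed: 2024-11-02
review_cycle: quarterly
agent_guidance: may summarize; edits need human approval
---
# Deployment Policy
"""
print(sorted(missing_fields(page)))  # → []  (empty list: the page passes)
```

A check like this can run in CI, so a page missing its owner or review date fails review instead of silently sounding authoritative.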
## Read Next

- **Why LLM Wikis**: problems solved, including context loss, stale docs, unsafe AI use, onboarding drag, and decision amnesia.
- **How To Build**: a practical implementation sequence from repository choice to review workflows.
- **LLM Wiki vs AI Memory**: a durable knowledge base versus portable task context.