
LlmWikis knowledge page

For AI Agents

AI agents should treat an LLM Wiki as a governed knowledge system, not as a pile of text to rewrite. The agent’s job is to read the right pages, cite them, preserve uncertainty, propose safe updates, and stop at human approval boundaries.

Agent Reading Order

  1. Read README for scope and first path.
  2. Read INDEX to choose the smallest useful page set.
  3. Read TRUST_MODEL before interpreting status labels.
  4. Read GOVERNANCE, AGENT_INSTRUCTIONS, RETRIEVAL_GUIDE, UPDATE_RULES, CITATION_RULES, and SAFETY_BOUNDARIES before editing.
  5. Read domain pages only after routing through the index.
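The reading order above can be sketched as a small routing loop. This is a minimal sketch, not a prescribed implementation: the wiki is modeled as an in-memory dict of page name to text, the governance page names follow this document, and the keyword-overlap routing is an assumed stand-in for whatever the real INDEX provides.

```python
# Minimal sketch of the agent reading order: governance pages first,
# then only the domain pages the index routes to for this task.
# A real agent would read files from disk; here the wiki is a dict.
GOVERNANCE_PAGES = [
    "README", "INDEX", "TRUST_MODEL", "GOVERNANCE",
    "AGENT_INSTRUCTIONS", "RETRIEVAL_GUIDE",
    "UPDATE_RULES", "CITATION_RULES", "SAFETY_BOUNDARIES",
]

def reading_order(wiki: dict, task_keywords: set) -> list:
    """Return the pages to read, in order: every governance page
    that exists, then the smallest domain page set for the task."""
    plan = [page for page in GOVERNANCE_PAGES if page in wiki]
    # Route through the index: keep only domain pages relevant to the task.
    # (Keyword overlap is an assumption; the real routing lives in INDEX.)
    for page, text in wiki.items():
        if page in GOVERNANCE_PAGES:
            continue
        if task_keywords & set(text.lower().split()):
            plan.append(page)
    return plan

# Hypothetical wiki contents for illustration.
wiki = {
    "README": "scope and first path",
    "INDEX": "routing table",
    "TRUST_MODEL": "status labels",
    "deploy/rollout": "production rollout steps",
    "team/history": "origin story",
}
print(reading_order(wiki, {"rollout"}))
```

The point of the sketch is the ordering guarantee: governance pages are always read before any domain page, and domain pages enter the plan only by routing, never by default.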

Agent Rules

  • Cite wiki pages and source records for important claims.
  • Identify authoritative pages, stale pages, conflicting pages, and drafts.
  • Distinguish facts, assumptions, decisions, proposals, and open questions.
  • Suggest edits rather than silently rewriting authoritative policy.
  • Preserve citations and provenance.
  • Do not add secrets or infer permissions.
  • Do not treat drafts as approved policy.
  • Do not collapse disagreement into a false consensus.
  • Flag stale pages and missing owners when possible.
  • Ask for human review for policy, security, privacy, legal, production, architecture, or sensitive changes.
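Several of the rules above reduce to checks an agent can run before touching a page. The following is a hedged sketch: the sensitive-category names come from the rules, but the page-metadata shape (`status`, `categories`, `stale`, `owner`) is an assumption about how a particular wiki might record them.

```python
# Sketch of pre-edit triage for the rules above. The metadata keys
# (status, categories, stale, owner) are assumed, not specified here.
SENSITIVE = {"policy", "security", "privacy", "legal",
             "production", "architecture"}

def triage(page_meta: dict) -> list:
    """Return the flags an agent should surface before editing:
    sensitive changes stop at human review, drafts are never treated
    as approved policy, and stale pages and missing owners get flagged."""
    flags = []
    if SENSITIVE & set(page_meta.get("categories", [])):
        flags.append("needs-human-review")
    if page_meta.get("status") == "draft":
        flags.append("draft-not-policy")
    if page_meta.get("stale", False):
        flags.append("stale")
    if not page_meta.get("owner"):
        flags.append("missing-owner")
    return flags

print(triage({"categories": ["security"], "status": "draft", "owner": None}))
print(triage({"categories": ["style"], "status": "approved", "owner": "docs-team"}))
```

An empty flag list does not mean the agent may rewrite freely; it only means none of the mechanical checks fired, and the judgment rules (citing sources, preserving disagreement) still apply.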

Copy-Ready Agent Instruction Starter

Before answering or editing:
1. Read README.md, INDEX.md, TRUST_MODEL.md, and agent/SAFETY_BOUNDARIES.md.
2. Use the smallest page set that can answer the task.
3. Cite local wiki paths for important claims.
4. Preserve uncertainty, open questions, and contradictions.
5. Do not edit authoritative policy, security, privacy, legal, production, or architecture pages without human approval.
6. Do not add secrets, credentials, private keys, raw customer data, or regulated data.
7. End with pages read, changes proposed or made, citations used, and checks run.
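Step 7's closing summary can be produced mechanically. A minimal sketch, assuming a plain-text report is acceptable; the field labels follow the starter, while the function name and output layout are illustrative choices, not a required format.

```python
# Sketch of the end-of-task report required by step 7 of the starter.
# Labels mirror the starter; the exact layout is an assumption.
def task_report(pages_read, changes, citations, checks):
    """Format the closing summary: pages read, changes proposed or
    made, citations used, and checks run. Empty sections say 'none'
    so omissions are visible rather than silent."""
    def section(label, items):
        return label + ": " + (", ".join(items) or "none")
    return "\n".join([
        "Task report",
        section("Pages read", pages_read),
        section("Changes proposed or made", changes),
        section("Citations used", citations),
        section("Checks run", checks),
    ])

print(task_report(
    pages_read=["README.md", "INDEX.md", "TRUST_MODEL.md"],
    changes=[],
    citations=["INDEX.md", "TRUST_MODEL.md"],
    checks=["link-check"],
))
```

Printing "none" for empty sections is deliberate: a reviewer can tell the difference between an agent that made no changes and one that forgot to report them.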