
LlmWikis knowledge page

LLM Wiki for Agentic Orchestration

Agentic systems work best with an LLM Wiki when the wiki is treated as governed source memory around the run. The orchestrator schedules work, tools execute actions, and the LLM Wiki tells agents what is authoritative, stale, sensitive, contradictory, or ready for review.

Support boundary

This guide is a working pattern for agents and orchestration layers. LlmWikis does not currently ship a public MCP server, A2A implementation, write API, trace exporter, live eval integration, managed agent runtime, SDK, CLI, certification, or official adapter.

Role split

| Layer | What it owns | What the LLM Wiki supplies |
| --- | --- | --- |
| Agentic orchestrator | Task planning, agent routing, tool selection, retries, interruptions, and run state. | Reviewed goals, constraints, source authority, known risks, and the smallest useful reading path. |
| Worker agents | Research, drafting, code changes, analysis, extraction, or verification tasks. | Page citations, ownership labels, update permissions, stale warnings, contradictions, and source traces. |
| Tool and protocol layer | Local tools, retrieval, APIs, MCP-style access, A2A-style coordination, observability, and eval harnesses when implemented elsewhere. | Curated pages and metadata that tools can read; review rules for anything they propose to change. |
| Human reviewers | Approval, publication, policy, security, privacy, legal, production, and support-claim decisions. | Proposed diffs, evidence links, run notes, unresolved questions, and log entries. |

How to work before the run

  1. Choose the task boundary. Name whether the run is answering, drafting, changing code, preparing a release, auditing, or proposing wiki updates.
  2. Route through the wiki index. Read README, INDEX, trust model, agent instructions, safety boundaries, and only the domain pages needed for the task.
  3. Extract a compact run packet. Give the orchestrator current facts, constraints, owners, forbidden actions, source links, and review gates instead of the entire wiki.
  4. Declare write permissions. Decide whether agents may read only, draft proposals, edit staged files, or touch reviewed pages after human approval.
  5. Name evidence outputs. Decide where proposed changes, tool outputs, test results, trace links, eval notes, and unresolved questions should land.
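The routing and extraction steps above can be sketched as a small loader that reads only the named pages and surfaces gaps instead of guessing. This is a minimal sketch, not a shipped API: the file names follow the starter-bundle layout mentioned in this guide, and `build_run_packet` is a hypothetical helper.

```python
from pathlib import Path

# Core reading path from the wiki index; domain pages are added per task.
# These file names mirror the starter bundle described in this guide.
CORE_PAGES = [
    "README.md",
    "INDEX.md",
    "TRUST_MODEL.md",
    "agent/AGENT_INSTRUCTIONS.md",
    "agent/SAFETY_BOUNDARIES.md",
]

def build_run_packet(wiki_root: str, domain_pages: list[str]) -> dict:
    """Collect only the pages one run needs; record missing pages instead of guessing."""
    root = Path(wiki_root)
    packet: dict = {"pages": {}, "missing": []}
    for rel in CORE_PAGES + domain_pages:
        page = root / rel
        if page.is_file():
            packet["pages"][rel] = page.read_text(encoding="utf-8")
        else:
            packet["missing"].append(rel)  # visible gap, not silent omission
    return packet
```

Keeping the missing-page list in the packet lets the orchestrator decide whether to proceed, narrow the task, or escalate before any agent starts working.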

Task packet shape

Use one compact task packet per orchestrated run. The starter bundle includes llm-wiki/agent/TASK_PACKET_TEMPLATE.md so operators can hand agents the current task without loading the whole wiki or hiding approval boundaries.

| Packet section | What it should name | Why it matters |
| --- | --- | --- |
| Task boundary | Requested outcome, in scope, out of scope, stop conditions. | Prevents agents from expanding a run into unrelated edits or unsafe actions. |
| Required reading | Wiki pages, source records, external canonical sources, and pages not needed. | Keeps retrieval small and auditable. |
| Permissions | Read-only, proposal-only, staged-write, or human-approved write mode. | Makes update authority explicit before tools execute. |
| Context packet | Current facts, constraints, owners, sensitive-data boundaries, contradictions, and stale claims. | Gives the orchestrator useful memory without turning old context into authority. |
| Evidence destinations | Run notes, proposed updates, tool outputs, trace or evaluation records, review queue, and roadmap/open-question paths. | Lets useful results survive the run without automatic publication. |
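One way to keep the packet sections honest in code is a small dataclass that rejects unknown permission modes at construction time. The field names below are illustrative, not the schema of TASK_PACKET_TEMPLATE.md.

```python
from dataclasses import dataclass, field

# The four write modes named in the Permissions row above.
PERMISSION_MODES = ("read-only", "proposal-only", "staged-write", "human-approved-write")

@dataclass
class TaskPacket:
    """Minimal sketch of one orchestrated run's task packet."""
    outcome: str
    in_scope: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)
    required_reading: list[str] = field(default_factory=list)
    permission_mode: str = "read-only"          # safest default
    context: dict = field(default_factory=dict)  # facts, owners, stale claims
    evidence_destinations: dict = field(default_factory=dict)

    def __post_init__(self) -> None:
        if self.permission_mode not in PERMISSION_MODES:
            raise ValueError(f"unknown permission mode: {self.permission_mode!r}")
```

Defaulting to read-only means a packet that forgets to declare write permissions cannot accidentally authorize edits.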

How to work during the run

  • Keep working notes separate from reviewed wiki pages until promotion is approved.
  • Cite wiki pages, raw sources, and external canonical sources for important claims.
  • Preserve contradictions, stale claims, missing owners, and low-confidence results as visible task state.
  • Use orchestration traces and tool outputs as evidence inputs, not as automatic public truth.
  • Stop before secrets, private transcripts, regulated data, production actions, destructive operations, or unsupported support claims.
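The stop list in the last bullet can be enforced as a guard an agent calls before each tool action. The category names below mirror that bullet; the function and its verdict shape are assumptions, not a shipped API.

```python
# Categories a run must stop on, matching the stop list above.
BLOCKED_CATEGORIES = {
    "secrets",
    "private-transcript",
    "regulated-data",
    "production-action",
    "destructive-operation",
    "unsupported-support-claim",
}

def check_action(action: str, categories: set[str]) -> dict:
    """Return a verdict instead of raising, so the run can escalate cleanly."""
    hits = categories & BLOCKED_CATEGORIES
    if hits:
        return {"action": action, "allowed": False, "escalate": sorted(hits)}
    return {"action": action, "allowed": True, "escalate": []}
```

Returning a verdict rather than raising keeps the refusal itself as visible task state, which the next section's escalation flow can pick up.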

How to work after the run

| Run output | First destination | Promotion rule |
| --- | --- | --- |
| Reusable answer or synthesis | Staged wiki draft or source summary | Promote after source, contradiction, owner, and freshness checks. |
| Code or docs change | Repository diff plus run notes | Promote after targeted checks, review, and any required release note or roadmap update. |
| Trace, eval, or tool output | Evidence log or raw source folder | Summarize into the wiki only when the result is stable, privacy-safe, and useful beyond the run. |
| New task or gap | Review queue, roadmap, or open questions page | Keep planned, candidate, or blocked labels until implementation evidence exists. |
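The destination table above can be expressed as data so the orchestrator files run outputs mechanically. The kind and destination names here are illustrative labels, not paths the starter bundle defines.

```python
# Maps a run-output kind to (first destination, promotion rule), per the table above.
PROMOTION_RULES = {
    "synthesis": ("staged-wiki-draft", "source, contradiction, owner, and freshness checks"),
    "code-change": ("repo-diff-plus-run-notes", "targeted checks, review, release/roadmap updates"),
    "trace": ("evidence-log", "summarize only if stable, privacy-safe, useful beyond the run"),
    "new-task": ("review-queue", "keep planned/candidate/blocked label until evidence exists"),
}

def route_output(kind: str) -> dict:
    """File a run output; unknown kinds go to human triage, never to the wiki."""
    if kind not in PROMOTION_RULES:
        return {"destination": "review-queue", "promotion_rule": "human triage"}
    destination, rule = PROMOTION_RULES[kind]
    return {"destination": destination, "promotion_rule": rule}
```

The fallback branch encodes the section's core rule: nothing is promoted to the wiki by default.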

Support and escalation

Agentic support starts when a run should stop, narrow, or ask for review instead of continuing autonomously. The starter bundle includes llm-wiki/agent/SUPPORT_ESCALATION_CHECKLIST.md for conflicts, stale or source-needed claims, private data, tool failures, missing tests, and unsupported capability requests.

| Trigger | Agent action | Reviewer action |
| --- | --- | --- |
| Conflicting or stale authority | Preserve both claims with source paths, mark the answer blocked or provisional, and avoid fake consensus. | Choose the winning source, update supersession notes, or create an open question with an owner. |
| Private or sensitive data | Stop, minimize exposed detail, keep notes in approved evidence locations, and ask for the right owner. | Decide whether the material can be summarized, redacted, retained privately, or excluded from the wiki. |
| Tool failure or missing checks | Record command, output, environment, retry state, and affected artifact; do not claim verification passed. | Approve a retry, change the task packet, or accept a blocked result with follow-up work. |
| Unsupported public capability claim | Keep the language planned, candidate, or blocked, and cite the support boundary. | Promote only after implementation evidence, public docs, and targeted checks exist. |
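When a trigger in the table fires, the agent's half of the exchange is just a structured record the reviewer can act on. This is a minimal sketch; the field names are assumptions, not the schema of SUPPORT_ESCALATION_CHECKLIST.md.

```python
from datetime import datetime, timezone

def make_escalation(trigger: str, detail: str, artifacts: list[str]) -> dict:
    """Emit a blocked escalation record for a reviewer; the agent does not resolve it."""
    return {
        "trigger": trigger,            # e.g. "conflicting-authority", "tool-failure"
        "detail": detail,              # what the agent observed, minimized if sensitive
        "artifacts": artifacts,        # paths to run notes, diffs, tool logs
        "status": "blocked",           # stays blocked until a reviewer acts
        "raised_at": datetime.now(timezone.utc).isoformat(),
    }
```

Hard-coding `status: "blocked"` keeps the resolution decision on the reviewer's side of the table.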

Copy-ready orchestration prompt

```text
You are using an LLM Wiki as governed source memory, not as scratch space.
1. Read README.md, INDEX.md, TRUST_MODEL.md, agent/AGENT_INSTRUCTIONS.md, agent/RETRIEVAL_GUIDE.md, and agent/SAFETY_BOUNDARIES.md.
2. Build a compact task packet from only the pages needed for this run.
3. Treat runtime traces, tool outputs, and agent messages as evidence inputs until reviewed.
4. Stage proposed wiki changes with source links, owner, review status, contradictions, and checks.
5. Do not claim public MCP, A2A, write API, eval, adapter, certification, or managed-service support unless the project has current public evidence.
6. If blocked, use SUPPORT_ESCALATION_CHECKLIST.md and stop before unsafe or unsupported work.
7. End with pages read, artifacts changed, evidence produced, review questions, and checks run.
```