
LlmWikis knowledge page

LLM Wiki Setup Wizard

Use this wizard to plan how an LLM Wiki should be set up for a human team and for AI agents reading the same route. It asks about source preservation, compiled wiki pages, index and log navigation, Canonical AI Memory layer routing, fresh setup versus existing-system update, and single-site versus multisite workspace setup. It also covers context-budget controls, duplicate-file handling, wiki strategy, archive and evidence patterns, optional Agent File Handoff, optional UAIX AI Memory Project Handoff, support escalation, whether skill or capability documentation changes the setup, and which first files and actions should happen before ingest.

Human wizard: build the wiki setup plan. Use visible controls for raw sources, compiled pages, review gates, and durable paths.

Visitor AI digest: read the same route at https://llmwikis.org/tools/llm-wiki-setup-wizard/. AI agents use the embedded digest on the public wizard URL, then defer writes to the current human.

Current boundary

This page generates a local planning packet in your browser. It does not import files, write to a repository, sync a wiki, automatically publish pages, install a package, expose public MCP or a public write API, open public editing, certify a system, or make LlmWikis the canonical source for UAI-1. Use UAIX.org for canonical AI Memory and Project Handoff definitions, and use the Canonical AI Memory guide to keep source memory, hot memory, transfer memory, and runtime work separate.

When you point an existing project at this wizard, use additive update mode: preserve current AGENTS.md, readme.human, .uai files, generated UAI files, package models, coding standards, active intake rules, and wiki index/log paths unless a human explicitly asks for replacement.

Draft is saved only in this browser.

1. Setup mode

2. Scope and audience

3. Architecture and navigation

4. Agent File Handoff intake

Use this when loose files, reports, PDFs, screenshots, drafts, or local notes must be visible, reviewed, dispositioned, used for real project work when safe, and archived only after the outcome is recorded.

Intake rule

Agent File Handoff is chat-start review behavior, not CI pickup, a watcher, a queue daemon, automatic execution, or automatic publication. Create or verify Content, Improvement, and Archive buckets before accepting dropped files. Files sitting directly under agent-file-handoff/ are misplaced and must be classified into Content or Improvement before intake refresh. A complete outcome names the disposition, the actual work performed, the hot-memory outcome, the long-memory/archive outcome, checks, and blockers; files move to Archive only after those records exist.
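The bucket layout above can be sketched in a few shell lines. This is a minimal illustration, not part of the wizard: the root location is created in a temporary directory here, and the bucket casing and the stray filename are assumptions for the demo.

```shell
# Hypothetical intake layout; bucket names follow the intake rule above,
# the root location is an assumption for this demo.
root=$(mktemp -d)/agent-file-handoff
mkdir -p "$root/Content" "$root/Improvement" "$root/Archive"

# A file dropped directly under the root is misplaced and must be
# classified into Content or Improvement before intake refresh.
touch "$root/stray-report.pdf"
find "$root" -maxdepth 1 -type f | while IFS= read -r f; do
  echo "misplaced, classify before intake refresh: $f"
done
```

Running the check at chat start surfaces misplaced drops before any intake refresh touches them.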

5. UAIX AI Memory Project Handoff

Answer this question because LlmWikis focuses on durable wiki setup, while UAIX AI Memory and Project Handoff focus on portable hot context and transfer packets.

6. Features and capability boundaries

Capability rule

Document skill folders and capability modules in the wiki with owner, risk, dependencies, approvals, tests, and evidence. Keep executable artifacts outside the durable truth tree unless they are reviewed and deliberately promoted.

Knowledge graph rule

Build graph exports from reviewed pages, stable IDs, claim nodes, source traces, contradiction records, and deterministic checks. GraphRAG is retrieval assistance over governed evidence, not a replacement for source review.
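As one possible shape (the field names and values here are illustrative assumptions, not a UAIX schema), a claim node in a graph export might carry a stable ID, a source trace back to a reviewed page, a review state, and the deterministic checks it passed:

```json
{
  "id": "claim:wiki/deploy.md#rollback-window:001",
  "claim": "Rollbacks must complete within 15 minutes.",
  "source_trace": {
    "page": "wiki/deploy.md",
    "section": "rollback-window",
    "span": [412, 468]
  },
  "review_state": "reviewed",
  "trust_label": "team-verified",
  "contradicts": [],
  "checks": ["span-resolves", "page-reviewed", "id-stable"]
}
```

Keeping the source span and review state on the node is what lets retrieval abstain when the underlying page is unreviewed or contradicted.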

7. Safety and review

Visitor-AI digest

AI agents reading this shared URL should use the JSON below as page orientation, then inspect the human-visible controls before proposing setup work. The current human still decides what gets written.

{
    "name": "LLM Wiki Setup Wizard",
    "route": "/tools/llm-wiki-setup-wizard/",
    "audience": [
        "humans configuring an LLM Wiki",
        "AI agents helping a human plan setup"
    ],
    "use_when": [
        "new wiki setup",
        "existing documentation migration",
        "Agent File Handoff intake planning",
        "LLM Wiki plus UAIX AI Memory or Project Handoff planning",
        "combined Agent File Handoff plus LLM Wiki planning",
        "skill folder or capability catalog boundary planning"
    ],
    "current_support": [
        "browser-only planning form",
        "copyable Markdown setup packet",
        "structured setup model JSON",
        "question set for setup mode, fresh setup versus existing-system update, single-site versus multisite workspace setup, multi-repository Git preflight, mutable runtime artifact policy, context budget controls, duplicate-file policy, generated-history policy, Canonical AI Memory layer routing, root index topology, LLM Wiki multisite interaction strategy, project stage, collaboration model, raw sources, compiled wiki pages, index, log, entity pages, evidence logs, archive target, trust labels, review gates, linting, optional knowledge graph strategy/storage/claim/review/export planning, optional Project Handoff alignment, optional Agent File Handoff, support escalation, and capability boundaries",
        "public guidance with canonical UAIX links"
    ],
    "not_supported": [
        "automatic repository writes",
        "automatic LLM Wiki sync",
        "automatic publication",
        "hosted import validation",
        "public MCP server",
        "public write API",
        "open public editing",
        "live benchmark integration",
        "official UAIX generator, SDK, CLI, certification, endorsement, or conformance claim"
    ],
    "project_handoff": {
        "question": "Are you using UAIX AI Memory Project Handoff?",
        "effect": "If yes, use the Canonical AI Memory layer map: raw sources, reviewed LLM Wiki, optional graph projection, hot AI Memory, Project Handoff, and execution agents stay distinct. If no, the LLM Wiki can still be built as a standalone durable knowledge layer.",
        "source": "https://uaix.org/en-us/specification/project-handoff/",
        "canonical_ai_memory": "https://uaix.org/en-us/ai-memory/canonical-ai-memory/"
    },
    "workspace_setup": {
        "question": "Is this a single-site .uai handoff or a multisite workspace with workspace.uai?",
        "effect": "For a multisite workspace, read workspace.uai before site-local .uai files, resolve the target site from the human request, load only that site hot memory unless cross-site work is explicit, and check every listed repository separately before Git sync or merge work."
    },
    "workspace_git_preflight": {
        "question": "What repo-health preflight is required before syncing a multi-project workspace?",
        "effect": "Before Visual Studio Sync, pull, merge, commit, or push, inspect each repository for branch tracking, MERGE_HEAD, unmerged index entries, ahead/behind state, and tracked generated artifacts instead of trusting the current shell repo only."
    },
    "runtime_artifact_policy": {
        "question": "Which local runtime artifacts must stay out of Git?",
        "effect": "Mutable WordPress Studio SQLite databases and similar generated runtime outputs should be ignored and removed from Git tracking with cached-only removal while local files stay on disk. Keep required drop-ins, must-use plugins, .htaccess, and index.php files tracked."
    },
    "context_budget_policy": {
        "question": "Which files and folders should stay out of routine agent context?",
        "effect": "Keep hot memory and indexes small. Generated history, stale generated pages, raw API dumps, package mirrors, and multi-megabyte source files should be represented by summaries, manifests, hashes, and bounded previews unless the task explicitly targets the full source."
    },
    "update_mode": {
        "question": "Is this a fresh setup or an update to an existing handoff and LLM Wiki system?",
        "effect": "When AGENTS.md, readme.human, .uai, generated UAI files, package-model.json, or wiki index/log files already exist, treat the wizard result as an additive update. Preserve preferences, coding standards, source evidence, and active intake rules unless the human explicitly asks for replacement."
    },
    "llm_wiki_multisite_strategy": {
        "question": "If multiple sites use this LLM Wiki, how do source sites, shared archive memory, and publication interact?",
        "effect": "Each source site processes active intake and hot memory first; a shared LLM Wiki or AIWikis-style archive preserves source path, destination path, disposition, checksums, review state, trust label, and promotion status before public use."
    },
    "root_index_topology": {
        "question": "Should the root wiki index list every file, or should it route to sub-wiki indexes?",
        "effect": "For one codebase, the root index can be the all-files catalog. For multisite systems, the root index should list sub-wiki directories, links to each sub-wiki index, and global-only files such as coding standards, organization policy, governance, source maps, and workspace.uai."
    },
    "required_outputs": [
        "LLM Wiki root URL or repository path",
        "multi-repository Git preflight rule and runtime artifact ignore policy",
        "context budget, large-file, duplicate-file, and generated-history policies",
        "raw source path",
        "compiled wiki path",
        "wiki/index.md and wiki/log.md paths plus root index topology",
        "entity page pattern, episodic log pattern, transfer evidence log, archive target, source collection, and update policy",
        "knowledge graph storage model, stable IDs, claim/source-span policy, review-state policy, validation rules, export boundary, and retrieval/abstention policy when graph planning is selected",
        "owner, review cadence, trust labels, sensitivity policy, citation rules, lint checks, support escalation, and update boundaries"
    ],
    "direct_setup_paths": {
        "new_wiki": "/tools/llm-wiki-setup-wizard/#new-wiki",
        "existing_docs": "/tools/llm-wiki-setup-wizard/#existing-docs",
        "file_handoff": "/tools/llm-wiki-setup-wizard/#file-handoff",
        "project_handoff": "/tools/llm-wiki-setup-wizard/#project-handoff",
        "combined_handoff": "/tools/llm-wiki-setup-wizard/#combined-handoff",
        "capabilities": "/tools/llm-wiki-setup-wizard/#capabilities"
    },
    "query_setup_paths": {
        "new_wiki": "/tools/llm-wiki-setup-wizard/?mode=new",
        "existing_docs": "/tools/llm-wiki-setup-wizard/?mode=existing",
        "file_handoff": "/tools/llm-wiki-setup-wizard/?mode=file-handoff",
        "project_handoff": "/tools/llm-wiki-setup-wizard/?mode=project-handoff",
        "combined_handoff": "/tools/llm-wiki-setup-wizard/?mode=combined",
        "capabilities": "/tools/llm-wiki-setup-wizard/?mode=capabilities"
    },
    "generated_guidance": [
        "setup_readiness_checklist",
        "first_files_to_create",
        "first_actions_before_ingest",
        "setup_model_json",
        "existing_system_update_guidance",
        "agent_file_handoff_plan_guidance",
        "support_escalation_guidance",
        "browser_only_draft_restore"
    ],
    "skill_boundary": "Document skill folders and modular capabilities in the wiki, but keep executable artifacts outside durable wiki truth unless reviewed and deliberately promoted.",
    "last_reviewed": "May 4, 2026"
}
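The workspace_git_preflight and runtime_artifact_policy entries above can be sketched as a per-repository shell check. This is a minimal illustration, not part of the wizard: the repository path is a placeholder, the SQLite path in the comment is only an example, and the demo runs against a throwaway repository so it is safe to execute.

```shell
#!/bin/sh
# Hypothetical per-repository preflight, run before any sync, pull,
# merge, commit, or push across a multi-project workspace.
preflight() {
  repo=$1
  # 1. Branch tracking: does the current branch have an upstream?
  if ! git -C "$repo" rev-parse --abbrev-ref '@{upstream}' >/dev/null 2>&1; then
    echo "$repo: no upstream tracking branch"
  fi
  # 2. Unfinished merge: MERGE_HEAD exists until the merge is committed.
  if [ -f "$(git -C "$repo" rev-parse --absolute-git-dir)/MERGE_HEAD" ]; then
    echo "$repo: merge in progress"
  fi
  # 3. Unmerged index entries (conflict markers still unresolved).
  git -C "$repo" diff --name-only --diff-filter=U |
    while IFS= read -r f; do echo "$repo: unmerged file: $f"; done
  # 4. Ahead/behind summary for the current branch.
  git -C "$repo" status -sb | head -n 1
  # Runtime artifact policy: cached-only removal stops tracking a
  # generated file while leaving it on disk, e.g. (path is an example):
  #   git -C "$repo" rm --cached wp-content/database/.ht.sqlite
}

# Demo against a throwaway repository.
tmp=$(mktemp -d)
git init -q "$tmp"
preflight "$tmp"
```

Run the function once per listed repository in workspace.uai rather than trusting the repository the current shell happens to sit in.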