LlmWikis knowledge page

LLM Wiki, AI Memory, and Project Handoff

Use LLM Wiki, AI Memory, and Project Handoff as different layers in the same governed knowledge system. The LLM Wiki is the durable, reviewed base. AI Memory is the compact, portable operating context. Project Handoff is the transfer packet that lets a person, team, vendor, or agent take over work.

Layer map

| Layer | Primary job | Typical content | Authority boundary |
| --- | --- | --- | --- |
| LLM Wiki | Durable, reviewed organizational memory. | Compiled pages, source links, trust labels, metadata, decisions, runbooks, indexes, logs. | Public or internal knowledge only after review; raw sources stay separate. |
| UAI AI Memory | Compact, portable context packet for a bounded task or operating situation. | Current constraints, facts, preferences, active state, and pointers to deeper sources. | Canonical support and formats belong to UAIX. |
| UAI Project Handoff | Transfer and takeover packet for a repository, project, team, or vendor boundary. | AGENTS.md, readme.human, typed .uai records, active intake, decisions, progress, checks. | Not an agent runtime, scheduler, or permission to bypass repository rules. |
| Execution agents | Do the work through the governed local workflow. | Codex, Cursor, Claude Code, Gemini, Copilot, local agents, and human maintainers. | Agents execute and propose; they do not become source authority by being fluent. |

Routing rule

  1. Put durable knowledge in the LLM Wiki. Store reviewed concepts, runbooks, policy, architecture, decisions, source summaries, contradictions, and logs where they can be maintained.
  2. Put current operating context in AI Memory. Keep only the facts, constraints, active task state, and source pointers needed for the bounded job.
  3. Put transfer context in Project Handoff. Give the next worker enough local state to resume without private chat history.
  4. Move old detail cold. Archive bulky history in the LLM Wiki or a cold memory system with hashes, summaries, dates, and retrieval pointers.
  5. Promote only through review. A handoff note becomes durable public knowledge only after source, privacy, authority, and verification checks.
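
The five routing steps above can be sketched as a small decision function. This is an illustrative sketch only; the record keys and layer names are assumptions, not a UAIX schema.

```python
def route(record):
    """Pick the first destination layer for a knowledge record.

    `record` is a dict of boolean flags; the keys are hypothetical,
    chosen to mirror the numbered routing rules.
    """
    if record.get("reviewed") and record.get("durable"):
        return "llm-wiki"         # rule 1: durable, reviewed knowledge
    if record.get("active_task"):
        return "ai-memory"        # rule 2: bounded operating context
    if record.get("transfer"):
        return "project-handoff"  # rule 3: takeover packet for the next worker
    if record.get("stale"):
        return "cold-archive"     # rule 4: bulky history goes cold
    return "intake"               # rule 5: unreviewed material waits for review
```

Note that promotion (rule 5) is a review decision, so the sketch routes anything unreviewed to intake rather than straight into the wiki.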

Single-site and multisite setup

A single site or one codebase can start from AGENTS.md, readme.human, and local .uai files. A multisite editor workspace needs one extra file: workspace.uai beside the Visual Studio Code workspace file. Each site's AGENTS.md should point to it; the agent then resolves the explicit site, domain, route, or path before loading any site-local memory.
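
The resolution order can be sketched as a pure lookup. The site-table shape and workspace.uai parsing are assumptions for illustration, not a defined UAIX format.

```python
def resolve_site(sites, explicit=None, cwd=None):
    """Return the name of the site whose memory may be loaded, or None.

    `sites` maps site name -> {"domain": ..., "path": ...} (hypothetical
    shape). An explicit human-named site, domain, or path wins over the
    current shell directory.
    """
    if explicit:
        for name, site in sites.items():
            if explicit in (name, site.get("domain"), site.get("path")):
                return name
        return None  # explicit target named but unknown: stop, do not guess
    if cwd:
        for name, site in sites.items():
            if site.get("path") and cwd.startswith(site["path"]):
                return name
    return None
```

Returning None for an unknown explicit target, instead of falling back to the shell directory, keeps the agent from silently loading the wrong site's memory.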

| Workspace shape | First read | Memory rule |
| --- | --- | --- |
| Single site or codebase | Local AGENTS.md, then readme.human and local .uai. | Load the local hot handoff bundle for the current task. |
| Multiple sites in one editor workspace | workspace.uai, then the selected site's AGENTS.md. | An explicit human domain, route, or path wins over the current shell directory; load only the selected site's hot memory. |
| Shared LLM Wiki or AIWikis archive | Selected source-site memory first, then the narrow archive or wiki record needed. | Preserve source site, source path, destination path, disposition, checksum, review state, trust label, and promotion status. |

The root index should match the workspace shape. For one codebase, wiki/index.md may be the all-files catalog. For a multisite or multi-project LLM Wiki, the root index should list sub-wiki directories, links to each sub-wiki index, and global-only files such as coding standards, organization policy, governance, source maps, and workspace.uai. Put each site’s all-files list inside that site’s sub-wiki index instead of flattening every file into the root wiki index.
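
A multisite root index might be laid out like this. The file and directory names are illustrative, not a required layout:

```
wiki/
  index.md             # lists sub-wikis and global-only files, not every page
  coding-standards.md  # global-only
  source-map.md        # global-only
  site-a/
    index.md           # site A's all-files catalog lives here
  site-b/
    index.md           # site B's all-files catalog lives here
workspace.uai          # beside the editor workspace file
```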

Do not load every sibling site’s .uai files just because the workspace contains them. Cross-site memory belongs in the conversation only when the task explicitly asks for source routing, authority comparison, package coordination, or archive preservation.

Multi-repo Git preflight

Dogfooding this pattern in a Visual Studio workspace with LLMWikis, UAIX, AIWikis, and JustAnIota showed a second rule: target routing is not enough. Before any sync, pull, merge, commit, or push, check every repository named by workspace.uai separately. A clean status in the current shell repository does not prove that a sibling repository is clean.

| Check | Why it matters | Stop condition |
| --- | --- | --- |
| Branch tracking and ahead/behind state | Visual Studio may show several repositories while the terminal is inside only one. | A repository has diverged and the merge preview shows real content conflicts. |
| .git/MERGE_HEAD and unmerged index entries | An unfinished merge can block every later sync even when the working tree shows no unstaged changes. | Unmerged text or binary paths remain, or the owner has not approved the merge outcome. |
| Tracked generated/runtime files | Mutable runtime files can create binary conflicts that the source workflow should not try to merge. | Files such as wp-content/database/.ht.sqlite, *.sqlite, *-wal, or *-shm are tracked. |
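
The per-repository checks above can be sketched as a pure classifier fed with Git's own output. The stop-condition wording and runtime patterns are illustrative assumptions; a real preflight would gather `git status --porcelain`, the presence of `.git/MERGE_HEAD`, and `git ls-files` for each repository named by workspace.uai.

```python
import fnmatch

# Mutable runtime patterns from the table above (illustrative, not exhaustive).
RUNTIME_PATTERNS = ["*.sqlite", "*-wal", "*-shm"]

def preflight(status_porcelain, merge_head_exists, tracked_files, diverged=False):
    """Return the list of stop conditions for one repository.

    `status_porcelain` is the text of `git status --porcelain`;
    `tracked_files` is the list of paths from `git ls-files`.
    """
    stops = []
    if diverged:
        stops.append("diverged: review merge preview before syncing")
    unmerged = any(line[:2] in ("UU", "AA", "DD", "AU", "UA", "DU", "UD")
                   for line in status_porcelain.splitlines())
    if merge_head_exists or unmerged:
        stops.append("unfinished merge: unmerged paths or MERGE_HEAD present")
    for path in tracked_files:
        if any(fnmatch.fnmatch(path, pat) for pat in RUNTIME_PATTERNS):
            stops.append(f"tracked runtime file: {path}")
    return stops
```

Run the check once per repository in the workspace, not only in the shell's current repository, and stop the whole sync if any repository returns a non-empty list.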

For WordPress Studio sites, keep required runtime support files such as wp-content/db.php, the SQLite must-use plugin, wp-content/database/.htaccess, and wp-content/database/index.php. Do not version the mutable SQLite database itself. Add ignore rules, remove the database from Git tracking with cached-only removal, and leave the local database file on disk.
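
A minimal sketch of that keep-versus-untrack decision, assuming the paths named in the paragraph above; the keep set and patterns mirror the prose and are not a plugin contract.

```python
import fnmatch

# Required runtime support files that must stay versioned.
KEEP = {
    "wp-content/db.php",
    "wp-content/database/.htaccess",
    "wp-content/database/index.php",
}

# Mutable database artifacts that should not be versioned.
MUTABLE = ["*.sqlite", "*.sqlite-wal", "*.sqlite-shm"]

def to_untrack(tracked_files):
    """Return tracked paths that should leave Git via cached-only removal,
    while keeping the required runtime support files."""
    return [p for p in tracked_files
            if p not in KEEP
            and any(fnmatch.fnmatch(p, pat) for pat in MUTABLE)]
```

Each returned path would be removed with `git rm --cached` and covered by a matching ignore rule, leaving the local database file on disk.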

If a merge conflict is already resolved and the index has no unmerged entries, conclude the merge with a normal merge commit. If real content conflicts remain, stop at the owning source site and ask for review instead of using shared wiki memory to decide the code outcome.

Promotion examples

| Input | First destination | Promote to durable wiki when |
| --- | --- | --- |
| New vendor onboarding packet | Project Handoff | The stable process, constraints, and source links have been reviewed. |
| Task-specific agent memory | AI Memory | A fact or decision will matter beyond the current task. |
| Old chat transcript | Raw source or cold archive | A reviewed summary can be written without leaking private details. |
| Contradictory research reports | Improvement intake | The contradiction is preserved and a reviewer chooses the public claim boundary. |

Do not blur the layers

LLM Wiki pages are not raw dumps. AI Memory is not a full institutional archive. Project Handoff is not automatic publication. Execution agents are not authority sources. UAI-1 validation can support evidence, but it is not certification or endorsement.