May 10, 2026
The LLM Wiki: Give Any AI Agent an Infinite Memory
Andrej Karpathy posted this concept and it went viral. Here's what it actually means in practice - and how to build one.
Andrej Karpathy - founding member of OpenAI, former head of AI at Tesla - posted about this on X a few months ago. The idea went viral almost immediately. He called it the "LLM wiki."
Once you understand it, you can't go back to how you used to work with agents.
What He Was Describing
The standard way most people use AI with documents is some form of RAG - retrieval-augmented generation. Upload files, the model retrieves relevant chunks at query time, generates an answer. It works. But the model re-derives everything from scratch on every question. Nothing builds up. Nothing compounds.
The LLM wiki flips that. Instead of retrieval at query time, the AI incrementally builds a persistent, structured wiki - a folder of interlinked markdown files sitting between you and your raw sources. When you add a new source - a video transcript, a web article, a workshop debrief, a meeting note - the AI doesn't just index it. It reads it, extracts what matters, and integrates it into the existing structure: updating concept pages, cross-linking entities, noting where new information challenges old conclusions.
The wiki keeps getting richer. The next query starts from everything you've already accumulated - not from zero.
Karpathy put the division simply: you curate sources and ask good questions. The LLM does all the bookkeeping. That split is the whole point.
Why This Works When Other Wikis Don't
Humans abandon wikis because the maintenance burden grows faster than the value. Updating cross-references, keeping summaries current, noting when a new source contradicts an old one - nobody sustains that for more than a few weeks.
LLMs don't get bored. They don't forget to update the index. They can touch 15 interlinked files in one pass without losing track. The maintenance cost drops to near zero. The wiki stays alive because it has the right maintainer.
How We Built It Here
In this workspace, the wiki/ folder is a live implementation of this pattern. The structure:
- `sources/` - raw ingested content (YouTube transcripts, web articles, local briefs). Immutable - the agent reads but never modifies these.
- `learning/` - processed knowledge pages, one per topic or video
- `content/` - content strategy, LinkedIn research, audience analysis
- `operations/` - past workshops, campaigns, webinars. One page per operation with key decisions and outcomes.
- `people/` - entity pages (Karpathy, Anthropic, Google) that grow as they appear across different sources
- `index.md` - the master catalog the agent reads first on every query
- `log.md` - append-only record of every ingest and query
When I drop a YouTube URL, the agent runs the transcription pipeline, saves the raw transcript to sources/, reads the index to understand what already exists, then writes or updates the relevant knowledge pages and cross-links everything. A single source can touch 8-10 pages in one pass.
Over time, the wiki builds a connected picture. The agent knows who Karpathy is, what he's said across multiple videos, where his ideas appear in other people's work, and which concepts are well-established versus contested. It's not chat history. It's compiled knowledge.
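To make that concrete, here's what a hypothetical entity page might look like after a couple of ingests - the names, dates, and links are illustrative, but the frontmatter fields match the schema the setup command defines below:

```markdown
---
title: "Andrej Karpathy"
date: "2026-05-10"
sources: ["sources/youtube/karpathy-llm-wiki.md", "sources/web/karpathy-x-post.md"]
related: ["learning/llm-wiki-concept.md"]
---
# Andrej Karpathy
Founding member of OpenAI; former head of AI at Tesla. Coined the "LLM wiki" framing.

## Related
- [LLM Wiki Concept](../learning/llm-wiki-concept.md)
```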
This is the project-level wiki - specific to one workspace and its domain.
The Global Flavor
For memory that crosses projects, there's a second pattern: a global wiki at ~/.claude/wiki/.
This one captures decisions and session context rather than domain knowledge. What was built in a session and why. What pattern failed. What approach finally worked in production. Organized by project, indexed the same way. Any Claude session in any repo can query it.
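A project page in that global wiki might accumulate entries like this (project name and details are hypothetical):

```markdown
## [2026-05-10] my-saas-app - auth refactor
- Built: moved session handling into middleware
- Failed: JWTs in localStorage (XSS exposure)
- Worked in production: httpOnly cookie with rotating refresh tokens
```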
The two serve different purposes. The project wiki builds domain knowledge over time. The global wiki builds institutional memory across everything you build.
Copy This to Your Agent
Save the content below as ~/.claude/commands/wiki-setup.md - then tell Claude /wiki-setup and it builds the structure from scratch.
# Wiki Setup
Builds an LLM wiki from scratch - either inside a project (wiki/ folder) or globally (~/.claude/wiki/).
Creates the folder structure, index, log, schema file, and two operating commands.
---
## Step 0: Ask First
Before creating anything:
1. "Project-level wiki (wiki/ inside a repo) or global wiki (~/.claude/wiki/) — or both?"
2. "What domain or topics will this wiki cover?" (e.g. business knowledge, research, content, session memory)
Use the answers to set `[BASE]` path and determine what categories make sense.
---
## Step 1: Create Folder Structure
```bash
# Project-level:
mkdir -p wiki/sources/youtube wiki/sources/web wiki/sources/local
mkdir -p wiki/learning wiki/content wiki/people wiki/operations
# Global (session memory):
mkdir -p ~/.claude/wiki/projects
```
---
## Step 2: Create index.md
Create `[BASE]/index.md`:
```markdown
# Wiki Index
Master catalog. Read this first on every query — then fetch only the files that match.
---
## Learning
*(empty — populate via /wiki-ingest)*
## Content
*(empty)*
## People
*(empty)*
## Operations
*(empty)*
---
*Last updated: YYYY-MM-DD | Total pages: 0*
```
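For reference, a populated section of the index might eventually read like this (page names are illustrative):

```markdown
## Learning
- [llm-wiki-concept](learning/llm-wiki-concept.md) - the wiki pattern itself; updated 2026-05-10
- [rag-vs-wiki](learning/rag-vs-wiki.md) - retrieval vs. accumulation trade-offs
```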
---
## Step 3: Create log.md
Create `[BASE]/log.md`:
```markdown
# Wiki Log
Append-only record of all ingests, queries, and lint passes.
## Format
## [YYYY-MM-DD] ingest | Source Title
Source: <type> — <path>
Pages touched: <list>
Key insight: <one line>
```
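A filled-in entry following that format (hypothetical source):

```markdown
## [2026-05-10] ingest | Karpathy on the LLM Wiki
Source: youtube - sources/youtube/karpathy-llm-wiki.md
Pages touched: learning/llm-wiki-concept.md, people/karpathy.md, index.md
Key insight: the human curates sources and questions; the LLM does the bookkeeping
```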
---
## Step 4: Create the Schema File
Create `[BASE]/SCHEMA.md` — this is the agent's standing instructions for maintaining the wiki:
```markdown
# Wiki Schema
## Rules
1. Always read index.md FIRST before any query or ingest
2. Never create a duplicate entity page — find and update the existing one
3. Every page must have frontmatter: title, date, sources, related
4. Every ingest updates index.md and appends to log.md — no exceptions
5. Use bidirectional linking: when page A references page B, page B should reference A
6. Flag gaps explicitly: [MISSING] or [NEEDS RESEARCH]
7. One source per ingest — complete fully before moving to the next
## Page Format
---
title: "Page Title"
date: "YYYY-MM-DD"
sources: ["sources/type/slug.md"]
related: ["category/related-page.md"]
---
## Categories
- sources/ — raw content, never modified after saving
- learning/ — processed knowledge, one page per topic
- content/ — content strategy and audience knowledge
- people/ — entity pages for people and organizations
- operations/ — past runs: workshops, campaigns, projects
```
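Rule 5 in practice: if a concept page references an entity page, the entity page links back (both pages hypothetical):

```markdown
<!-- learning/llm-wiki-concept.md -->
## Related
- [Andrej Karpathy](../people/karpathy.md)

<!-- people/karpathy.md -->
## Related
- [LLM Wiki Concept](../learning/llm-wiki-concept.md)
```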
---
## Step 5: Create /wiki-ingest
Save as `[BASE]/../.claude/commands/wiki-ingest.md` (or `~/.claude/commands/wiki-ingest.md` for global):
```markdown
# Wiki Ingest
Add a new source to the wiki. One source per run — complete fully before moving on.
## Process
1. Detect source type:
- YouTube URL → run transcription pipeline, save to sources/youtube/
- Web URL → fetch and convert to markdown, save to sources/web/
- Local .md file → read directly, save copy to sources/local/
2. Read wiki/index.md fully (mandatory before touching any page)
3. Read the source fully
4. Decide:
- Which category this belongs to
- Which new pages to create
- Which existing pages to update
5. Brief the user: "Found: [2-3 key takeaways]. I'll create [X] and update [Y]. OK?"
Wait for confirmation.
6. Write pages with frontmatter (title, date, sources, related)
Add cross-links in a ## Related section on every page
7. Update index.md — add one line per new page, update date on modified pages
8. Append to log.md
9. Confirm: "Done. Created: [X]. Updated: [Y]. Key insight: [one line]."
```
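In a session, an ingest run might look like this (URL and page names are illustrative):

```
/wiki-ingest https://youtube.com/watch?v=...
→ "Found: the wiki beats RAG for accumulation; LLMs are cheap maintainers.
   I'll create learning/llm-wiki-concept.md and update people/karpathy.md. OK?"
```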
---
## Step 6: Create /wiki-query
Save as `.claude/commands/wiki-query.md` (or `~/.claude/commands/wiki-query.md` for global):
```markdown
# Wiki Query
Answer a question using the wiki as the source of truth.
## Process
1. Read wiki/index.md fully
2. Identify relevant pages from topics, tags, and category names
3. Read matched pages — not everything, just what's relevant
4. Synthesize an answer with citations to specific wiki pages
5. If the answer is valuable enough to keep: ask "Should I file this as a wiki page?"
6. Append query record to log.md
## Rules
- Read the actual pages, not just index summaries
- Never invent content not in the wiki — flag gaps explicitly
- A good answer compounds: file it back into the wiki when it's worth keeping
```
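And a query against the same wiki (illustrative):

```
/wiki-query what does Karpathy say the human's job is?
→ "Curating sources and asking good questions; the LLM handles the bookkeeping.
   Cited: learning/llm-wiki-concept.md, people/karpathy.md"
```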
---
## Verification
Run `/wiki-ingest` with any source. Then run `/wiki-query` on a topic it covers.
If the query finds and cites the ingested content correctly — the system is working.