{"id":2100,"date":"2026-05-08T16:57:22","date_gmt":"2026-05-08T07:57:22","guid":{"rendered":"https:\/\/www.aicritique.org\/us\/?p=2100"},"modified":"2026-05-08T16:57:25","modified_gmt":"2026-05-08T07:57:25","slug":"andrej-karpathys-latest-concept-llm-wiki-and-the-future-of-enterprise-knowledge","status":"publish","type":"post","link":"https:\/\/www.aicritique.org\/us\/2026\/05\/08\/andrej-karpathys-latest-concept-llm-wiki-and-the-future-of-enterprise-knowledge\/","title":{"rendered":"Andrej Karpathy\u2019s latest concept \u2018LLM Wiki\u2019 and the future of enterprise knowledge"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" id=\"executive-summary\">Executive summary<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">Andrej Karpathy\u2019s public GitHub Gist, published on April 4, 2026, describes\u00a0<strong>LLM Wiki<\/strong>\u00a0not as a finished product, but as an \u201cidea file\u201d for agentic knowledge work: instead of re-retrieving raw fragments on every question, an LLM incrementally compiles curated source material into a persistent, interlinked Markdown wiki, guided by a schema file such as\u00a0<code>CLAUDE.md<\/code>\u00a0or\u00a0<code>AGENTS.md<\/code>. In Karpathy\u2019s pattern, the core stack is\u00a0<strong>raw sources<\/strong>\u00a0as immutable truth, a\u00a0<strong>wiki<\/strong>\u00a0as the maintained knowledge layer, and a\u00a0<strong>schema<\/strong>\u00a0as the operating contract for ingest, query, and maintenance. The practical promise is compounding knowledge: each ingest and each good answer can strengthen the corpus rather than vanish into chat history.\u00a0(1)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The investor page from\u00a0treats that pattern as a transition point rather than an endpoint. 
Its\u00a0<strong>Self-Organizing Wiki<\/strong>\u00a0vision keeps original enterprise documents as the source of truth, treats the LLM Wiki as an AI-generated structured knowledge layer, and adds\u00a0<strong>ConceptMiner<\/strong>\u00a0plus a\u00a0<strong>ThinkNavi<\/strong>\u00a0interface to discover concept clusters, bridge concepts, structural analogies, and knowledge gaps. Read conservatively, this is a roadmap and market thesis, not proof of a fully shipped enterprise platform. Analytically, the strongest conclusion is that LLM Wiki is best seen as a\u00a0<strong>knowledge-compilation layer that complements retrieval<\/strong>, while Self-Organizing Wiki is an attempt to add\u00a0<strong>structural discovery and associative exploration<\/strong>\u00a0on top of that layer.\u00a0(1)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-llm-wiki-is\">What LLM Wiki is<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">Karpathy\u2019s core claim is simple: most document-centric LLM workflows still behave like retrieval-augmented generation, where the model finds relevant chunks at query time and reconstructs the answer from scratch each time. LLM Wiki shifts the heavy knowledge work earlier. A new source is not just indexed; it is read, summarized, linked into existing entity and concept pages, reconciled against prior claims, and recorded in a persistent wiki that the human primarily reads while the model primarily maintains. Karpathy\u2019s three-layer architecture is explicit:\u00a0<strong>raw sources<\/strong>\u00a0are immutable and remain the source of truth; the\u00a0<strong>wiki<\/strong>\u00a0is an LLM-generated Markdown layer composed of summaries, entity pages, concept pages, comparisons, overviews, and syntheses; the\u00a0<strong>schema<\/strong>\u00a0tells the agent how to structure the wiki and how to execute ingest, query, and maintenance workflows.\u00a0(1)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">That architecture also defines a division of labor. 
Karpathy\u2019s formulation is that the human curates sources, explores, and asks the right questions, while the LLM performs the persistent bookkeeping: summarizing, cross-referencing, updating, and filing. He explicitly lists personal self-tracking, topic research, book companions, business\/team knowledge, and due diligence as candidate use cases, and he frames Markdown plus local browsing tools as the working environment in which the wiki becomes a durable artifact rather than a transient chat output.\u00a0(1)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-it-differs-from-rag\">How it differs from RAG<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">Conventional RAG, in official documentation, processes a knowledge base into searchable vector or other retrieval structures and, when a user asks a question, retrieves relevant passages and provides them to the model to ground the answer. LLM Wiki changes the timing of synthesis. The main distinction is therefore\u00a0<strong>ingest-time compilation versus query-time assembly<\/strong>. In RAG, freshness is relatively strong because new documents can be retrieved as soon as they enter the index. In LLM Wiki, coherence is stronger because cross-document synthesis can already exist before the question is asked, but freshness depends on how quickly the wiki is re-ingested and maintained.\u00a0(2)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">This timing difference clarifies the comparison with nearby systems.\u00a0GraphRAG\u00a0still centers on an indexing pipeline plus a query engine: it extracts entities, relationships, claims, community summaries, vectors, and then answers questions over completed indexes through local, global, DRIFT, or basic search. 
That is more structured than baseline RAG, but still fundamentally query-oriented.\u00a0NotebookLM\u00a0remains source-grounded at answer time: its help pages emphasize inline citations, source selection, and grounded answers based on uploaded sources or static source copies. Projects in\u00a0ChatGPT\u00a0are different again: they are long-running workspaces that keep chats, files, instructions, memory, and tools together, but the official description does not position them as autonomous wiki compilers. So LLM Wiki is not simply \u201cbetter RAG\u201d; it is a different design target:\u00a0<strong>persistent knowledge artifacts first, retrieval second<\/strong>. The last clause is a synthesis from the cited materials and should be read as an interpretive comparison.\u00a0(3)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"architecture-and-operations\">Architecture and operations<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">Karpathy\u2019s public spec names three primary operations:\u00a0<strong>ingest<\/strong>,\u00a0<strong>query<\/strong>, and\u00a0<strong>lint<\/strong>. Ingest reads a new raw source, discusses key takeaways with the human if desired, writes a summary page, updates relevant entity and concept pages, updates\u00a0<code>index.md<\/code>, and appends an entry to\u00a0<code>log.md<\/code>; Karpathy notes that a single source can touch 10\u201315 pages. Query starts from the index, reads relevant pages, synthesizes an answer with citations, and can save that answer back into the wiki as a new page. Lint performs health checks for contradictions, stale claims, orphan pages, missing concept pages, missing cross-references, and data gaps worth filling with new searches.\u00a0<code>index.md<\/code>\u00a0is content-oriented navigation;\u00a0<code>log.md<\/code>\u00a0is chronological, append-only operational history. 
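<\/p>

<p class=\"has-medium-font-size\">As a concrete illustration, one of the lint health checks above can be sketched in a few lines. This is an illustrative sketch, not part of Karpathy\u2019s public spec: the flat <code>wiki\/<\/code> layout and the <code>[[wikilink]]<\/code> convention are assumptions, and it covers only one check (orphan pages) from the list.<\/p>

```python
# Minimal sketch of one "lint" health check over a Markdown wiki:
# find orphan pages that no other page (including index.md) links to.
# Assumes [[wikilink]]-style internal links; real schemas vary.
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # matches [[Page]] and [[Page|alias]]

def find_orphans(wiki_dir: str) -> list[str]:
    """Return page names that no other page in the wiki links to."""
    pages = {p.stem: p for p in Path(wiki_dir).rglob("*.md")}
    linked: set[str] = set()
    for page in pages.values():
        for target in WIKILINK.findall(page.read_text(encoding="utf-8")):
            linked.add(target.strip())
    # index.md is navigation, so it is never reported as an orphan itself
    return sorted(n for n in pages if n not in linked and n != "index")
```

<p class=\"has-medium-font-size\">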
Karpathy also notes that this works \u201csurprisingly well\u201d at moderate scale and that the wiki itself can live as a Git-backed Markdown repository, which naturally gives version history and rollback.\u00a0(1)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">What Karpathy\u2019s Gist\u00a0<strong>does not<\/strong>\u00a0specify is equally important. It does not define a canonical parser, normalization stack, chunking policy, citation schema, merge algorithm, or audit model. So when people now talk about ingest internals such as\u00a0<em>parsing<\/em>,\u00a0<em>normalization<\/em>,\u00a0<em>chunking<\/em>, and\u00a0<em>citation capture<\/em>, those are mostly\u00a0<strong>implementation details added by follow-on tools rather than fixed parts of Karpathy\u2019s public spec<\/strong>\u00a0(inference). Community implementations illustrate the gap. The\u00a0<code>llm-wiki-compiler<\/code>\u00a0project adds multimodal ingest, chunked retrieval with reranking, paragraph- and claim-level provenance, typed page kinds, candidate review queues, contradiction metadata, linting, and rollback-oriented roadmap items. Another implementation adds\u00a0<code>log.md<\/code>, structured JSON logs, and an append-only\u00a0<code>audit.db<\/code>, with source hashes, cost records, job history, cache invalidation on file hash change, and audit queries. A third adds approval bundles, contradiction detection, provenance-tracked graph edges, context packs, and hybrid search. A fourth makes the vault structure itself explicit with\u00a0<code>raw\/<\/code>,\u00a0<code>wiki\/<\/code>,\u00a0<code>outputs\/<\/code>, and\u00a0<code>SCHEMA.md<\/code>.\u00a0(5)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The agent workflow implied by the public pattern is concrete enough to describe at file level. 
A typical\u00a0<strong>Obsidian\u00a0+ agent<\/strong>\u00a0loop is: use Obsidian Web Clipper to capture a web page into Markdown; save article content and metadata into\u00a0<code>raw\/<\/code>; let the agent generate or revise\u00a0<code>wiki\/sources\/&lt;source>.md<\/code>,\u00a0<code>wiki\/concepts\/&lt;concept>.md<\/code>, and\u00a0<code>wiki\/entities\/&lt;entity>.md<\/code>; update\u00a0<code>wiki\/index.md<\/code>\u00a0and\u00a0<code>wiki\/log.md<\/code>; then inspect the graph and backlinks in Obsidian, where internal links are navigable and can auto-update when files are renamed. Karpathy explicitly describes browsing the results in real time with Obsidian open beside the agent. Obsidian\u2019s Web Clipper supports templates, variables, and Markdown extraction of page content, while Graph view visualizes node-link relationships inside the vault.(1)\u00a0<\/p>\n\n\n\n<p class=\"has-medium-font-size\">With\u00a0Claude Code, the wiki pattern maps naturally onto\u00a0<code>CLAUDE.md<\/code>. The docs say Claude Code reads and edits files, runs commands, uses persistent\u00a0<code>CLAUDE.md<\/code>\u00a0instructions and auto memory, and asks for permission before modifying files. A practical LLM Wiki arrangement is therefore to store page naming rules, provenance rules, and update procedures in\u00a0<code>CLAUDE.md<\/code>, then have the agent edit the relevant Markdown files and optionally run Git commands or lint tools. With\u00a0Codex, the equivalent control layer is\u00a0<code>AGENTS.md<\/code>: Codex reads those files before work, layers guidance from global to repo-local scope, can inspect repositories, edit files, run commands, and exposes action transcripts for review and Git-based rollback. 
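<\/p>

<p class=\"has-medium-font-size\">To make this concrete, a schema file for such an arrangement might look like the fragment below. This is a hypothetical example, not a published convention: every directory name and rule here is an assumption layered on Karpathy\u2019s raw\/wiki\/schema split, and any real <code>CLAUDE.md<\/code> or <code>AGENTS.md<\/code> would be project-specific.<\/p>

```markdown
# Wiki maintenance rules (hypothetical schema fragment)

## Layout
- `raw/` is immutable source material; never edit files there.
- `wiki/sources/`, `wiki/concepts/`, `wiki/entities/` hold derived pages.

## Ingest procedure
1. Read the new file under `raw/`.
2. Write or revise `wiki/sources/<source>.md` with a cited summary.
3. Update affected concept and entity pages and add cross-links.
4. Update `wiki/index.md`; append one line to `wiki/log.md`.
5. Show the diff and wait for approval before committing.
```

<p class=\"has-medium-font-size\">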
An example such as \u201cingest\u00a0<code>raw\/meeting-2026-05-07.md<\/code>, update the decision page, revise\u00a0<code>index.md<\/code>, append\u00a0<code>log.md<\/code>, and show the diff\u201d is therefore highly plausible, but the exact file conventions are still\u00a0<strong>schema-dependent rather than standardized<\/strong>\u00a0(inference).\u00a0(6)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"why-now-and-its-limits\">Why now and its limits<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"why-now\">Why now<\/h3>\n\n\n\n<p class=\"has-medium-font-size\">The concept is resonating now because agentic tools have become much better at cross-file reading, editing, command execution, and long-running project guidance. Karpathy\u2019s own public follow-up says the pattern becomes especially interesting once the wiki is large enough\u2014he gives an example of a research wiki with roughly 100 articles and 400,000 words. At the same time, open-source implementations and Hacker News threads appeared within days, which suggests the community sees the pattern as operationally buildable rather than merely rhetorical.\u00a0(7)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"limits-and-enterprise-challenges\">Limits and enterprise challenges<\/h3>\n\n\n\n<p class=\"has-medium-font-size\">The major criticism is not that the idea is worthless, but that it can be oversold. In the Gist comments, one detailed critique argues that once the wiki exceeds modest size, retrieval, ranking, indexing, reranking, chunking, and access control all come back; another warns that when the same process both reads and writes the knowledge base, \u201csilent corruption\u201d becomes a real risk. Other comments debate whether \u201cwiki\u201d is even the right term for a static or agent-maintained Markdown corpus, though defenders argue the more serious question is not naming but whether the system has citations, provenance, permissions, auditability, and editorial controls. 
These are\u00a0<strong>community critiques<\/strong>, not Karpathy\u2019s own claims, but they identify the load-bearing risks.\u00a0(1)<\/p>\n\n\n\n<p class=\"has-medium-font-size\">For enterprise use, the mitigation pattern is fairly clear even if no single source yet defines a universal standard. A serious implementation should keep original sources immutable; attach paragraph- or claim-level provenance where possible; stage updates through approval queues rather than writing straight into authoritative pages; maintain append-only operation logs and source hashes; use Git or equivalent rollback; test update quality and contradiction handling; and separate access policy on raw sources from visibility of derived wiki pages. This mitigation bundle is\u00a0<strong>best-practice synthesis rather than a single-source prescription<\/strong>\u00a0(inference), but it follows directly from Karpathy\u2019s raw\/wiki\/schema split, his use of\u00a0<code>log.md<\/code>\u00a0and Git, the community implementations\u2019 review and audit features, and the investor-page emphasis on traceability, access rights, conflicting documents, and auditability.\u00a0(1)<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"self-organizing-wiki-as-the-next-evolution\">Self-Organizing Wiki as the next evolution<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">The investor page from\u00a0Mindware\u00a0explicitly frames a progression:\u00a0<strong>RAG retrieves documents; LLM Wiki compiles knowledge into a persistent wiki; Self-Organizing Wiki organizes that wiki into conceptual maps and associative trails<\/strong>. 
Its layered model is explicit:\u00a0<strong>Original enterprise sources<\/strong>\u00a0remain the source of truth, the\u00a0<strong>LLM Wiki layer<\/strong>\u00a0is AI-generated structured knowledge, the\u00a0<strong>ConceptMiner layer<\/strong>\u00a0is a self-organizing conceptual map, and the\u00a0<strong>ThinkNavi interface<\/strong>\u00a0provides exploration, dialogue, synthesis, and decision support. That is an important architectural move because it refuses to treat the AI-generated wiki as the final truth layer in enterprise settings.\u00a0(2)<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"877\" height=\"55\" src=\"https:\/\/www.aicritique.org\/us\/wp-content\/uploads\/2026\/05\/image.png\" alt=\"\" class=\"wp-image-2101\" srcset=\"https:\/\/www.aicritique.org\/us\/wp-content\/uploads\/2026\/05\/image.png 877w, https:\/\/www.aicritique.org\/us\/wp-content\/uploads\/2026\/05\/image-300x19.png 300w, https:\/\/www.aicritique.org\/us\/wp-content\/uploads\/2026\/05\/image-768x48.png 768w\" sizes=\"auto, (max-width: 877px) 100vw, 877px\" \/><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted has-medium-font-size\">ConceptMiner is the most distinctive part of the proposal. According to the investor page, it takes chunks, wiki pages, and generated conceptual descriptions, embeds them, and organizes them with a GNG+MST-based conceptual structure network. The declared outputs are not just nearest-neighbor search results but concept clusters, semantic neighborhoods, bridge concepts, knowledge gaps, duplicated or fragmented concepts, structural changes over time, hidden relationships, and areas where new hypotheses may emerge. It also proposes multiple representational models\u2014Trigger\/Situation\/Motive, Logical Structure, Implication\/Lesson\u2014so that traversal can jump from topical similarity to structural analogy across domains. 
The page calls the end state \u201cEnterprise Associative Memory.\u201d This is conceptually ambitious, but it remains a public roadmap\/positioning document, not an audited technical benchmark. Notably, the same page treats source traceability and audit features as part of the 6\u201312 month roadmap, so those capabilities should not be assumed to be fully mature today.\u00a0(2)<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"implications-for-enterprise-knowledge-management\">Implications for enterprise knowledge management<\/h2>\n\n\n\n<p class=\"has-medium-font-size\">The most useful enterprise reading is pragmatic. For\u00a0<strong>personal work<\/strong>\u00a0and\u00a0<strong>slow-moving research<\/strong>, LLM Wiki is already compelling because it converts repeated synthesis into durable pages, reduces repeated rediscovery, and fits naturally with Markdown, links, local Git, and agentic editing. For\u00a0<strong>team use<\/strong>, it becomes attractive when humans can review updates and when work benefits from persistent thematic pages rather than ephemeral chat results. For\u00a0<strong>large enterprise deployments<\/strong>, however, the minimum bar rises sharply: provenance, access control, formal retention and deletion rules, snapshotting, rollback, approval queues, and clear separation between official knowledge and AI-derived interpretation become indispensable. That is exactly why the Mindware page treats the wiki as a middle layer rather than the final source of truth. The cleanest conceptual summary is Karpathy\u2019s compilation layer first, then structural discovery above it, then governed enterprise use around both.\u00a0(1)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"comparison-table\">Comparison table<\/h3>\n\n\n\n<p class=\"has-medium-font-size\">The table below synthesizes Karpathy\u2019s Gist, official documentation for RAG,\u00a0GraphRAG,\u00a0NotebookLM, projects in\u00a0ChatGPT, and the Mindware investor page. 
The \u201cbest-fit scale\u201d and parts of the governance row are analytical synthesis rather than direct one-line claims from any single source.\u00a0(3)<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th class=\"has-text-align-left\" data-align=\"left\">Dimension<\/th><th class=\"has-text-align-left\" data-align=\"left\">RAG<\/th><th class=\"has-text-align-left\" data-align=\"left\">LLM Wiki<\/th><th class=\"has-text-align-left\" data-align=\"left\">Self-Organizing Wiki<\/th><\/tr><\/thead><tbody><tr><td>Primary timing<\/td><td>Query-time retrieval<\/td><td>Ingest-time compilation, query-time reading of compiled pages<\/td><td>Ingest-time compilation plus post-compilation concept modeling<\/td><\/tr><tr><td>Main artifact<\/td><td>Retrieved passages and grounded answer<\/td><td>Persistent Markdown wiki, index, log, derivative answer pages<\/td><td>LLM Wiki plus concept maps, associative trails, exploration interface<\/td><\/tr><tr><td>Source of truth<\/td><td>External knowledge base \/ original documents<\/td><td>Original sources remain authoritative; wiki is maintained derivative layer<\/td><td>Original enterprise sources explicitly remain the source of truth<\/td><\/tr><tr><td>Traceability<\/td><td>Often strong at passage level if citations are exposed<\/td><td>Varies by implementation; strongest when claim\/range provenance is captured<\/td><td>Intended to preserve distinction between sources and AI-generated interpretation<\/td><\/tr><tr><td>Freshness<\/td><td>Usually strong if the index is updated<\/td><td>Depends on re-ingest cadence and maintenance discipline<\/td><td>Depends on source refresh plus concept-layer refresh<\/td><\/tr><tr><td>Scalability<\/td><td>Retrieval infrastructure is mature<\/td><td>Needs search, ranking, and governance once the corpus grows<\/td><td>Adds another abstraction layer, so governance complexity rises further<\/td><\/tr><tr><td>Governance profile<\/td><td>Familiar for enterprise 
search and grounded QA<\/td><td>Requires added review, audit, rollback, and permission controls<\/td><td>Explicitly designed around layered truth vs interpretation, but current public source is still roadmap-level<\/td><\/tr><tr><td>Best fit<\/td><td>FAQ, grounded QA, rapidly changing corpora<\/td><td>Personal knowledge, research, expert-team synthesis, code\/document compilers<\/td><td>Enterprise KM, strategic research, meeting\/customer intelligence, associative discovery<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"references\">References<\/h3>\n\n\n\n<p><strong>Primary and official sources<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>[Karpathy&#8217;s GitHub Gist \u201cLLM Wiki\u201d](https:\/\/gist.github.com\/karpathy\/442a6bf555914893e9891c11519de94f?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Karpathy&#8217;s X message \u201cLLM Knowledge Bases\u201d](https:\/\/x.com\/karpathy\/status\/2039805659525644595?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Mindware Research Institute page for investors](https:\/\/www.mindware-jp.com\/en\/for-investors\/)<\/li>\n\n\n\n<li>[IBM&#8217;s RAG review](https:\/\/www.ibm.com\/docs\/en\/watsonx\/saas?topic=solutions-retrieval-augmented-generation)<\/li>\n\n\n\n<li>[Google Cloud&#8217;s RAG review](https:\/\/cloud.google.com\/use-cases\/retrieval-augmented-generation)<\/li>\n\n\n\n<li>[GraphRAG official document](https:\/\/microsoft.github.io\/graphrag\/?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Microsoft Research&#8217;s GraphRAG Blog](https:\/\/www.microsoft.com\/en-us\/research\/blog\/graphrag-unlocking-llm-discovery-on-narrative-private-data\/?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[NotebookLM official help \u201cUse chat in NotebookLM\u201d](https:\/\/support.google.com\/notebooklm\/answer\/16179559?hl=en)<\/li>\n\n\n\n<li>[ChatGPT Projects official help](https:\/\/help.openai.com\/en\/articles\/10169521-projects-in-chatgpt)<\/li>\n\n\n\n<li>[Obsidian official help \u201cImport 
Markdown files\u201d](https:\/\/obsidian.md\/help\/import\/markdown)<\/li>\n\n\n\n<li>[Obsidian official help \u201cGraph view\u201d](https:\/\/obsidian.md\/help\/plugins\/graph?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Obsidian official help \u201cWeb Clipper\u201d](https:\/\/obsidian.md\/help\/web-clipper?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Claude Code official document](https:\/\/code.claude.com\/docs\/en\/overview?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[Codex official document](https:\/\/developers.openai.com\/codex?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[CacheZero repository](https:\/\/github.com\/swarajbachu\/cachezero?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[twillm repository](https:\/\/github.com\/Jermolene\/twillm?utm_source=chatgpt.com)<\/li>\n\n\n\n<li>[llmwiki repository](https:\/\/github.com\/lucasastorian\/llmwiki?utm_source=chatgpt.com)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"unresolved-questions-and-assumptions\">Unresolved questions and assumptions<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Direct source gap:<\/strong>\u00a0Karpathy\u2019s Gist defines the pattern, but it does\u00a0<strong>not<\/strong>\u00a0standardize parsing, normalization, chunking, source-range citation capture, or merge\/conflict logic; those details in this report are therefore marked as\u00a0<strong>inference<\/strong>\u00a0when derived from community implementations rather than from the Gist itself.\u00a0(1)<\/li>\n\n\n\n<li><strong>Direct source gap:<\/strong>\u00a0No primary source in this review provides a controlled benchmark showing that LLM Wiki broadly outperforms RAG or GraphRAG across accuracy, cost, freshness, and governance. Any claim that LLM Wiki \u201creplaces\u201d RAG would therefore go beyond the evidence currently available.\u00a0(1)<\/li>\n\n\n\n<li><strong>Roadmap caveat:<\/strong>\u00a0The Mindware investor page is the primary public source for Self-Organizing Wiki, but it is clearly an investor\/roadmap page. 
Statements about market direction and planned features can be cited directly; statements about shipped enterprise maturity should be treated cautiously.\u00a0(2)<\/li>\n\n\n\n<li><strong>Terminology dispute:<\/strong>\u00a0The debate over whether a Markdown-based, agent-maintained corpus should be called a \u201cwiki\u201d is real in public discussion, but it is not yet settled by any authoritative technical standard. That naming debate matters mostly for expectation-setting around collaboration, editorial control, and auditability.\u00a0(1)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"summary\">Summary<\/h3>\n\n\n\n<p class=\"has-medium-font-size\">The deepest significance of LLM Wiki is not that it makes retrieval disappear. It is that it treats knowledge work as&nbsp;<strong>compilation and maintenance<\/strong>, not just search. Karpathy\u2019s Gist gives a minimal but powerful public pattern for that shift. The Mindware investor page pushes the idea into a more enterprise-oriented direction by separating original sources from AI interpretation and then adding conceptual structure discovery on top. 
If that roadmap holds, the next frontier in enterprise knowledge systems will not be \u201cbetter answers\u201d alone, but better&nbsp;<strong>knowledge layers<\/strong>: what is original, what is compiled, what is inferred, what is approved, and how all four stay traceable over time.&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Executive summary Andrej Karpathy\u2019s public GitHub Gist, published on April 4, 2026, describes\u00a0LLM Wiki\u00a0not as a finished product, but as an \u201cidea file\u201d for agentic knowledge work: instead of re-retrieving raw fragments on every question, an LLM incrementally compiles curated&hellip;<\/p>\n","protected":false},"author":1,"featured_media":2102,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13,3,9,59],"tags":[],"class_list":["post-2100","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-knowledgegraph","category-llm","category-rag","category-trende"],"_links":{"self":[{"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/posts\/2100","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/comments?post=2100"}],"version-history":[{"count":1,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/posts\/2100\/revisions"}],"predecessor-version":[{"id":2103,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/posts\/2100\/revisions\/2103"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/media\/2102"}],"wp:attachment":[{"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/media?parent=
2100"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/categories?post=2100"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aicritique.org\/us\/wp-json\/wp\/v2\/tags?post=2100"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}