# Knowledge System and SWEL

## Progressive Deepening

The knowledge system implements a progressive deepening strategy to manage context window economics:

1. **Hot Memory Check** — cached summaries keyed by topic. If a recent synthesis exists, return it immediately.
2. **Metadata Downselect** — load file metadata only (names, types, sizes, descriptions). The LLM acts as a relevance filter, selecting which files are worth reading in full.
3. **Full Content Analysis** — load only the selected files. The LLM extracts and synthesizes relevant information.
4. **Hot Memory Update** — cache the synthesis for future queries on the same topic.

This avoids loading entire document collections into context. The LLM is the relevance filter, not a brute-force retrieval engine.

## Knowledge Graph Construction

The knowledge graph is created automatically during document ingestion via recursive semantic extraction:

- The first document is extracted in isolation
- The second document is extracted in the context of the first
- The nth document triggers a recasting of all relationships across all prior documents
- The system enforces sparse connectivity rather than redundant linkage

This produces a dynamic network of semantic relationships that forms the basis for hierarchical retrieval. No manual ontology design or knowledge engineering is required from the customer.

## The SWEL (Scalable Work Executable Library)

When a synthetic worker solves a problem, its codified solution becomes part of the Scalable Work Executable Library. This forms the basis for enterprise-wide, evolving procedural memory.

### How It Works

1. A worker encounters a problem that cannot be solved with existing MBUs
2. It uses the Custom Code MBU to generate and execute Python code
3. The code is parameterized, documented, and added to the SWEL
4. Other workers can discover and use it via search-first retrieval
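The search-first loop in steps 1–4 can be sketched in a few lines of Python. This is a minimal illustration, not the platform's actual API: the `Swel` and `SwelEntry` names are hypothetical, the keyword-match search stands in for semantic retrieval, and the injected `generate_code` callable stands in for the Custom Code MBU.

```python
from dataclasses import dataclass, field

@dataclass
class SwelEntry:
    # Hypothetical library entry: names and fields are illustrative.
    name: str
    description: str
    code: str                     # parameterized Python source
    executions: int = 0

@dataclass
class Swel:
    entries: list = field(default_factory=list)

    def search(self, problem: str):
        # Stand-in for semantic retrieval: naive keyword match
        # against each entry's description.
        hits = [e for e in self.entries if problem.lower() in e.description.lower()]
        return hits[0] if hits else None

    def solve(self, problem: str, generate_code) -> SwelEntry:
        # Search first: reuse an existing solution when one matches.
        entry = self.search(problem)
        if entry is None:
            # No match: generate new code (here an injected callable
            # standing in for the Custom Code MBU) and add it to the library.
            entry = SwelEntry(name=problem, description=problem,
                              code=generate_code(problem))
            self.entries.append(entry)
        entry.executions += 1
        return entry

swel = Swel()
first = swel.solve("parse invoice totals", lambda p: f"# generated for: {p}")
second = swel.solve("parse invoice totals", lambda p: "# never called")
assert first is second            # the second encounter reuses the stored entry
assert len(swel.entries) == 1     # one problem, one library entry
```

The key design property is visible in the last two lines: the second call never reaches code generation, which is the mechanism behind the reduced inference consumption discussed below.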
5. Workers can also evolve variations of existing solutions

### Search-First Architecture

Before generating any new code, the synthetic worker searches the existing library. This eliminates redundant computation: one worker's solution becomes available to every worker.

### The Flywheel

Each subsequent unit of knowledge becomes more valuable in the presence of all others:

- The solution itself
- The ability for other workers to use it
- The ability for other workers to compose it with existing solutions
- The ability for other workers to evolve variations
- Metadata about when and how it works best
- Parameterized method inputs and expected outputs
- Human editability and intervention points

Early in a deployment, most problems require generating new code. As the SWEL grows, more problems are solved by discovering and composing existing solutions. The system gets faster and more reliable — and cheaper to run — through accumulated executable knowledge.

This has a structural consequence: the SWEL reduces inference consumption over time. Solved problems stay solved. The second time a worker encounters a familiar problem, it retrieves the existing solution instead of generating a new one. The system converges toward fewer model calls per unit of work accomplished. This is the opposite of how API-consumption business models work, and it is one reason why the frontier labs — whose revenue depends on maximizing inference volume — are structurally unable to build this kind of system. You do not build the machine that shrinks your own meter.

### Quality Control

Every solution in the SWEL includes performance metadata: success rate, average execution time, total executions, failure modes, fork history. Workers use this metadata to decide whether to use, evolve, or replace solutions. Low-performing code is naturally selected out of active use.

---

*For the interactive visual walkthrough: https://usemissioncontrol.com/platform/#architecture-codegen*
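The use/evolve/replace decision in the Quality Control section can be sketched as a simple policy over the performance metadata. The field names and thresholds below are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SolutionMetadata:
    # Illustrative subset of the metadata described above.
    name: str
    successes: int
    executions: int
    avg_exec_seconds: float

    @property
    def success_rate(self) -> float:
        return self.successes / self.executions if self.executions else 0.0

def choose_action(meta: SolutionMetadata,
                  min_rate: float = 0.9,
                  min_sample: int = 20) -> str:
    # Too little history: keep using it and keep gathering data.
    if meta.executions < min_sample:
        return "use"
    # Reliable solutions stay in active use.
    if meta.success_rate >= min_rate:
        return "use"
    # Marginal solutions get forked and evolved.
    if meta.success_rate >= 0.5:
        return "evolve"
    # Chronic failures are selected out of active use.
    return "replace"

assert choose_action(SolutionMetadata("parse_invoice", 95, 100, 1.2)) == "use"
assert choose_action(SolutionMetadata("flaky_scraper", 60, 100, 3.4)) == "evolve"
assert choose_action(SolutionMetadata("broken_export", 10, 100, 0.8)) == "replace"
```

Thresholding on a minimum sample size before trusting the success rate is the design choice that lets new solutions accumulate history before being judged.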