# Security Model — The Security Inversion

## The Problem With Every Other Approach

The dominant model in agentic AI is: trust the LLM's output and execute it. Give the agent tools, let it decide what to do, and hope the guardrails hold.

This model fails in production because:

- LLMs hallucinate capabilities they don't have
- Prompt injection can redirect agent behavior
- Tool-use APIs give agents capabilities the operator may not have intended
- There is no hard boundary between "the agent decided to do X" and "the agent was tricked into doing X"

## Mission Control's Inversion

Allow nothing. Enable specific things.

A synthetic worker cannot do anything that is not a pre-approved, human-authored capability. The LLM does not decide what tools exist — the administrator does. The LLM does not decide what systems to access — the RBAC policy does. The LLM does not decide what code to run — the package whitelist does.

This is not a prompt instruction. It is a runtime constraint enforced at the interpreter level.

## What This Means in Practice

- A synthetic worker configured with only `knowledge`, `reason`, and `document_generation` MBUs literally cannot send email, browse the web, or execute code. Those capabilities do not exist in its runtime.
- A synthetic worker with `code_generate_new` can generate Python, but the execution sandbox blocks `os`, `subprocess`, `sys`, and any package not on the whitelist. The LLM cannot circumvent this — it is not a prompt rule.
- A synthetic worker cannot delegate to another worker with more permissions than it has. Permission escalation is architecturally impossible.
- Every LLM call, every MBU execution, every file access is logged with full provenance. There is no "the agent did something and we don't know why."

## SOC2 Compliance

Mission Control maintains SOC2 Type II compliance via Drata with continuous monitoring.
Controls cover:

- Access management and authentication
- Data handling and encryption
- Change management and deployment
- Incident response and monitoring
- Vendor risk management

## The Security Conversation

When a customer asks "how do I trust an AI agent with access to my production systems?", the answer is not "the model is really good" or "we have a system prompt that tells it to be careful."

The answer is: the agent operates inside a governance layer that makes unauthorized actions architecturally impossible. Nine firewalls. Runtime enforcement. Full audit. No arbitrary execution.

The question is not whether you trust the model. The question is whether you trust the box the model operates inside. And the box is built for exactly this.

---

*For the interactive visual walkthrough: https://usemissioncontrol.com/platform/#architecture-no-arbitrary-exec*
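To make the "runtime constraint enforced at the interpreter level" idea concrete, here is a minimal Python sketch of import whitelisting under an exec hook. This is an illustration, not Mission Control's implementation; the names `ALLOWED_MODULES` and `run_sandboxed` are hypothetical.

```python
# Minimal sketch of interpreter-level import whitelisting.
# ALLOWED_MODULES and run_sandboxed are hypothetical illustrative names,
# not Mission Control's actual API.
import builtins

# Operator-authored whitelist: everything not listed is blocked by default.
ALLOWED_MODULES = {"math", "json", "statistics"}

def _guarded_import(name, globals=None, locals=None, fromlist=(), level=0):
    root = name.partition(".")[0]
    if root not in ALLOWED_MODULES:
        # Enforcement lives in the runtime, not in a prompt the LLM could ignore.
        raise ImportError(f"import of '{root}' is not on the whitelist")
    return builtins.__import__(name, globals, locals, fromlist, level)

def run_sandboxed(source: str) -> dict:
    """Execute worker-generated code with the guarded import hook installed."""
    safe_builtins = dict(vars(builtins))
    safe_builtins["__import__"] = _guarded_import
    env = {"__builtins__": safe_builtins}
    exec(source, env)
    return env
```

With this hook in place, `run_sandboxed("import os")` raises `ImportError` regardless of what the generated code or its prompt says, while whitelisted imports such as `math` work normally. A production sandbox would add process isolation on top; the point of the sketch is that the deny-by-default decision is made by the runtime, not by the model.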
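The no-escalation rule for delegation (a worker cannot delegate to another worker with more permissions than it has) reduces to a subset check at spawn time. A minimal sketch, with hypothetical names (`Worker`, `delegate`):

```python
# Sketch of the no-escalation rule: a worker may only spawn a child whose
# capability set is a subset of its own. Names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Worker:
    name: str
    capabilities: frozenset

def delegate(parent: Worker, child_name: str, requested: frozenset) -> Worker:
    """Create a child worker, refusing any capability the parent lacks."""
    escalated = requested - parent.capabilities
    if escalated:
        raise PermissionError(
            f"{parent.name} cannot grant {sorted(escalated)}: escalation blocked"
        )
    return Worker(child_name, requested)
```

Because the check is a set operation performed by the spawning runtime, there is no code path through which a chain of delegations can accumulate permissions: every link in the chain can only shrink the capability set.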