# Deployment Model

## On-Premises First — Because Enterprises Require It

Swarm deploys entirely inside the customer's environment. No data leaves the customer's infrastructure. No callbacks to Mission Control servers. No SaaS dependency. This is not a premium tier — it is the only deployment model. Every customer gets on-prem.

- **Containerized (Docker)** — compatible with GCP, Azure, AWS, or Oracle Cloud
- **Model-agnostic** — Anthropic Claude, OpenAI, or self-hosted models, configured at install time. Swap providers with a configuration change, not a re-architecture.
- **AWS Bedrock recommended** for customers with an existing AWS footprint, but not required
- **No vendor lock-in** — if a model provider changes pricing, policy, or capability, the customer switches providers without touching the platform

## Why This Architecture Exists

On-premises deployment with vendor-agnostic inference is a standard enterprise requirement across defense, energy, intelligence, financial services, and manufacturing. These are baseline expectations:

- Data never leaves the perimeter
- No dependency on a single model vendor
- Compliance with NERC CIP, ITAR, DFARS, SOC 2, and industry-specific frameworks
- Audit trails that satisfy regulators, not just internal dashboards

Frontier AI labs build extraordinary models, and Swarm is designed to deploy them inside environments that meet these requirements. The models are the engine; Swarm provides the governance, identity management, knowledge persistence, and deployment infrastructure that enterprises require to actually put them into production. This is a complementary relationship. The platform is model-agnostic because enterprises cannot afford to be model-dependent.

Enterprise procurement increasingly treats single-vendor AI dependency as an explicit supply chain risk.
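The provider-swap pattern behind this claim can be sketched in a few lines. This is an illustrative sketch only, not Swarm's actual internal API: every name here (`ChatModel`, `build_client`, the adapter classes, the config keys) is a hypothetical stand-in for how a unified abstraction lets install-time configuration select the provider.

```python
"""Illustrative sketch of a provider-agnostic model interface.

All names are hypothetical; they show the pattern of swapping
providers via configuration, not Swarm's actual implementation.
"""
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class AnthropicAdapter:
    model: str
    base_url: str

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Anthropic API here")


@dataclass
class SelfHostedAdapter:
    model: str
    base_url: str

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the on-prem inference server here")


# Registry of adapters; supporting a new provider means adding one entry.
PROVIDERS = {"anthropic": AnthropicAdapter, "self_hosted": SelfHostedAdapter}


def build_client(config: dict) -> ChatModel:
    """Select the provider from install-time config, with no code changes."""
    adapter_cls = PROVIDERS[config["provider"]]
    return adapter_cls(model=config["model"], base_url=config["base_url"])


# Switching providers is a one-line change to the config, not a re-architecture:
client = build_client({"provider": "self_hosted",
                       "model": "llama-3-70b",
                       "base_url": "https://inference.internal:8443"})
```

Because the rest of the platform depends only on the `ChatModel` interface, a pricing or policy change at one vendor is absorbed by editing configuration, which is the property procurement teams are asking for.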
Vendor diversification is now doctrine in defense, energy, and intelligence procurement — not a preference, but a compliance requirement enforced by the customer's own contracting apparatus. When a contracting officer requires vendor optionality, the question is not whether multi-vendor is better — it is whether the platform supports it at all. Swarm does. Agent platforms locked to a single model provider do not.

## Security Posture

- VPN access to the customer tenancy
- SSH access (port 22) to the VM hosting the Swarm deployment
- Contractor or subcontractor accounts in Active Directory, as required by customer cybersecurity policy
- Mission Control coordinates with customer security teams to meet all posture requirements
- Specific access requirements are documented jointly during scoping

## SSO and Identity

- OIDC and SSO integration with the customer's existing identity provider
- Synthetic workers get accounts and credentials the same way human employees do
- IT provisions them through existing onboarding workflows
- No special infrastructure required beyond what is already in place for human users

## Architecture

| Component | Technology |
|-----------|-----------|
| Frontend | PHP 8.3, Nginx |
| Backend | Python 3.12, Flask, Gunicorn + Gevent |
| Database | MySQL 8.0 |
| Execution | Sandboxed Python subprocesses |
| Browser automation | Patchright (Playwright-based) |
| LLM interface | Unified abstraction across Anthropic, OpenAI, self-hosted |

## What the Customer Provides

- Cloud environment or on-premises VM
- Network access per security policy
- Identity provider integration
- Domain-specific documents and standards for knowledge ingestion
- Designated engineers for the teaching and correction loop — these are the experts whose knowledge will be reanimated, and their time investment during the pilot produces capability that the organization retains permanently

## What Mission Control Provides

- Platform installation and configuration
- Synthetic worker training and deployment
- Forward-deployed engineering team embedded with the customer
- Correction-capture instrumentation
- Knowledge graph construction and maintenance
- Full documentation, weekly collaboration sessions, and a final report

## Why On-Prem Creates Compounding Defensibility

Every on-premises deployment produces configuration state that is specific to that customer's environment: RBAC policies, knowledge graph structures, SWEL libraries, SSO integrations, credential provisioning workflows, scheduling constraints, audit log schemas. This state can only be learned from inside the perimeter.

A forward-deployed team accumulates this understanding over the engagement and beyond. Every week inside the perimeter produces knowledge about the customer's environment that makes the deployment harder to replace and easier to expand. This is not a switching cost you engineer — it is a switching cost created by the physics of where the work happens. Cloud-hosted products cannot learn what they cannot access.

Additionally, on-prem deployment means the correction data — the private flywheel that improves the system for each customer — never leaves the customer's environment. The model provider sees tokens flowing through their API. They do not see the correction objects, the edit magnitudes, the rationale, or the knowledge graph links. The deployment improves itself through private data the labs will never touch.

---

*For the interactive visual walkthrough: https://usemissioncontrol.com/platform/#implementation-cloud-deploy*
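As a hedged illustration of the correction objects described in the defensibility section above, the following sketch shows the kind of record that is captured on-prem and never leaves the perimeter. Every field name and the toy magnitude metric are assumptions for illustration, not Swarm's actual schema.

```python
"""Hypothetical shape of an on-prem correction object (illustrative only).

Field names and the magnitude metric are assumptions; they sketch the
kind of private record captured inside the customer's perimeter.
"""
from dataclasses import dataclass, field


@dataclass
class Correction:
    task_id: str
    agent_output: str                # what the synthetic worker produced
    expert_edit: str                 # what the designated engineer changed it to
    rationale: str                   # the expert's reasoning, in their own words
    edit_magnitude: float            # e.g. fraction of the output changed, 0.0-1.0
    knowledge_graph_links: list[str] = field(default_factory=list)


def edit_magnitude(before: str, after: str) -> float:
    """Crude magnitude proxy: fraction of characters changed (illustrative)."""
    if not before and not after:
        return 0.0
    longer = max(len(before), len(after))
    # count positions where the two strings already agree
    same = sum(1 for a, b in zip(before, after) if a == b)
    return 1.0 - same / longer
```

Because records like these never transit the model provider's API, they remain the customer's private improvement signal: the provider sees tokens, not the correction history.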