Autonomous machine intelligence is crossing a unique threshold in 2025. Systems have left the labs and are increasingly integrated into critical operations in the world around us.
At Mission Control we see this every day in the partners we support with our technology across infrastructure, advanced manufacturing, energy, and national security. To our partners, AI has become practical.
These systems are making decisions, operating independently, and executing work that used to require human judgment.
You're living this.
In fact, there's a good chance that you're deploying these systems, governing them, building them, or deciding what they should and shouldn't do.
And that last piece is the hard part.
The cultural moment around AI has shifted in the past few years, and a new discourse is emerging. While we ought not become unmoored from the underlying virtues that guide us as a society, it's become clear that the governance frameworks and attitudes imagined in 2022 and 2023 aren't sufficient to address the realities of 2025 and beyond.
Every day that reality grows more pressing. As such, our discourse about the values we seek to animate with this transformation must evolve as well. There is no other choice.
This summit brings together ~70 people who understand what that means. Two days to work through the hard questions with peers who are wrestling with the same problems you are.
This isn't a conference. There will be no presentations. No pitches. No panels where you sit and listen.
On February 20th, you'll spend the day in small, rotating teams working through structured scenario simulations. Real problems. Hard decisions. You will work closely with new and old colleagues and friends to wrestle with the questions that matter most, drawing on your operational experience and strategic judgment.
This event is conducted under the Chatham House Rule throughout. Use the ideas, don't attribute them. This enables honest conversation that's impossible in most professional settings.
2025 was the "Year of the Agent". Autonomous systems moved from experimental to operational at scale. Commercial organizations deployed production-grade autonomous workers. Government workforce reductions made "do more with less" non-optional. And the gap between technical capability and governance continues to widen even as the technology accelerates.
As such, 2026 is the time to set norms. The systems are live. The deployments are happening. The decisions about how autonomous intelligence operates in critical contexts are being made right now, by people like you.
And that means we have a unique opportunity to do two things:
1) Envision the virtuous benefits we wish to see from this technology
2) Evolve our fundamental discourse and beliefs about AI governance to match current capabilities and their accelerating trajectory
That's why we are gathering.
We're honored to gather ~70 senior leaders who are actually building, deploying, and governing autonomous AI systems in high-stakes environments.
CIOs and operational executives from energy, manufacturing, logistics, financial services. People running systems where failure isn't an option. They understand what it means to operate 24/7/365 with zero tolerance for error.
People from companies building autonomous systems and the infrastructure they run on. They understand what's possible, what's hard, and what the technology actually needs to work at scale.
People responsible for deploying AI in national security contexts. Senior leadership from US, UK, and allied defense communities. These are the operators who understand that mistakes have existential consequences.
Researchers, regulators, and advisors working on frameworks for AI governance and deployment. People who bridge technical capability and societal norms.
Mission Control AI builds infrastructure for autonomous synthetic workers in critical industries: manufacturing, energy, defense, and logistics. We deploy systems that operate autonomously for hours at a time, handling judgment-based work in environments where failure has real consequences. We're based in San Francisco, CA.
We're operators. We understand what autonomous systems can do, what they can't do yet, and what they shouldn't do. And as a Public Benefit Corporation, we're highly attuned to the open questions about the virtues and values these systems embody and hold. We live with the questions this summit explores.
We started the Cambridge Summit in 2023 with our partners at The Intellectual Forum at Jesus College, Cambridge because we saw a gap in how people were talking about both autonomous AI and responsible AI. Too much "move fast and break things" from Silicon Valley. Too much "regulate everything" from Brussels. Not enough practical thinking from people who actually deploy these systems in contexts where mistakes cost lives or billions. We set out to fix that.
The summit bridges those worlds and closes that gap. It's for operators, builders, and governors who want to do this work well and fast. Because we believe care and acceleration must go hand in hand.
February 20th, 2026
9:30 AM until 6:00 PM
Attendees are expected to be fully present for the entirety of the day. Breaks are provided throughout.
Catered lunch served. Coffee, tea, and refreshments provided throughout the day.
Approved invitation only. No cost to attend.
If lodging is required, it can be arranged on a first-come, first-served basis.
All sessions convened under the Chatham House Rule. Use the ideas, don't attribute them.
Produced by Mission Control AI PBC and The Intellectual Forum at Jesus College, Cambridge
Your operational experience is why you're here. Your strategic judgment. Your perspective on the problems we're working through. The systems you're deploying or governing or building. That's exactly what this conversation needs.
We're honored you'd spend a day working on this with us. We strongly encourage qualified applicants to request an invitation. While we cannot guarantee attendance to all who apply, we will review every request carefully and respond within 48 hours.