Better Agents of Our Nature
GOAL OF THIS DOCUMENT
Reportage [to both our attendees and our global community] on the key takeaways and events of the day from 2026's Better Agents of Our Nature Summit.
If you were in the room with us that day: you know the importance of acting swiftly on the experiences we shared and the new knowledge we gained. This is why we gather you and break down silos across borders, industries, and roles. This day could not have happened without you. Thank you.
If you were not in the room with us that day: this readout is a summary of the exercises, takeaways, and spirit of the day. We welcome your thoughts and questions, and would encourage you to register for other Mission Control community events: including next year's Summit.
CONTEXT
Sixty-five senior AI leaders [from defence, intelligence, energy, financial services, manufacturing, technology, and policy] gathered at Jesus College for the third Cambridge Summit. The series has run since 2023, and this was Mission Control's fourth Summit gathering. We're honored to convene this growing community of leaders and to continue working so closely with our colleagues at The Intellectual Forum at Jesus College to produce such a special day.
The Summit was split into a morning and afternoon session.
The morning was given to structured working sessions in cross-disciplinary teams, with each working group exploring its own set of questions.
And in the afternoon, we played a simulation exercise designed bespoke by Mission Control for the Summit. The simulation, which shares the Summit's title of The Better Agents of Our Nature, is a live decision-making exercise that compresses the 2026 to 2033 AI timeline into five-minute bursts: challenging the room with fifteen 'Pick A' or 'Pick B' decisions. It was a forcing function for the tradeoffs that the group spent the morning discussing.
The Summit was conducted in accordance with the Chatham House Rule. While we cannot attribute remarks to individuals, we are pleased to report here the compelling conversations that took place.
STARTING AT THE END
Two hours. Fifteen decision rounds across three acts. Fast.
Participants at cross-disciplinary tables voted individually [mobile device, binary choices, no deliberation by design] across six domains: corporate adoption, employment, political governance, military and strategic affairs, public discourse, and consumer adoption.
Each round's aggregate vote shifted a shared world-state score, which determined which variant of the next scenario the room encountered. The simulation responded to its own participants. Your choices changed the world you were choosing in.
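For readers who want to reason about the mechanic itself, a minimal sketch of how such a vote-driven world-state might be wired together follows. It is illustrative only: the round prompts, shift weights, and variant thresholds are assumptions made for this readout, not Mission Control's actual implementation.

    # Hypothetical sketch of the vote-driven world-state mechanic described above.
    # Prompts, shift weights, and variant thresholds are illustrative assumptions,
    # not the actual implementation used in the exercise.
    from dataclasses import dataclass, field

    @dataclass
    class Round:
        prompt: str
        shift_a: float   # how far option A pushes the shared score
        shift_b: float   # how far option B pushes the shared score

    @dataclass
    class WorldState:
        score: float = 0.0                           # the score the room collectively steers
        history: list = field(default_factory=list)

        def apply_votes(self, rnd: Round, votes_a: int, votes_b: int) -> None:
            """Aggregate the room's binary votes and shift the world-state."""
            total = votes_a + votes_b
            if total == 0:
                return
            margin = (votes_a - votes_b) / total     # the margin, not just the winner, moves the world
            shift = rnd.shift_a if margin > 0 else rnd.shift_b
            self.score += shift * abs(margin)
            self.history.append((rnd.prompt, votes_a, votes_b, self.score))

        def next_variant(self) -> str:
            """Choose which variant of the next scenario the room encounters."""
            if self.score > 1.0:
                return "optimistic"
            if self.score < -1.0:
                return "contested"
            return "baseline"

    # Example: 40 of 65 participants pick option A in a single round.
    state = WorldState()
    state.apply_votes(Round("Adopt agentic triage in operations?", shift_a=0.8, shift_b=-0.6), 40, 25)
    print(state.next_variant())   # -> "baseline": one round rarely tips the world on its own

The detail worth noticing is the feedback loop: each round's margin, not merely its winner, determines the shape of the round that follows.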
In the third act, round duration compressed from five minutes to one. Decisions arrived faster than deliberation could accommodate.
That was the point [and it is the point: this is the condition that leaders navigating AI governance face daily, reproduced in a room where the consequences were shared and visible]. The exercise surfaced assumptions that slower discussion leaves unexamined.
It revealed what participants prioritized when the stakes were high and the clock was short. The individual voting mechanic prevented groupthink. The aggregate outcome created collective accountability. You owned your vote. You lived with everyone else's.
Participants described the experience in one phrase that needs no elaboration: the shift from discussing governance to practicing it. ↣
WHAT WAS LEARNED FROM THE SUMMIT
Everything around AI [measurement, oversight, legal infrastructure, the human beings inside the organizations deploying it] is moving too slowly.
The imperative for autonomous systems in critical domains is clear. Every group was composed of practitioners and leaders from different specialties, industries, and regions. And it was commonly reflected that both the need for [and immediate promise of] highly autonomous AI systems are real. What is lagging is not the value signal or the timing of the moment: it is all the human parts. Which ought to come as no surprise to anyone working either in Frontier AI or in the industries being transformed by it: the rate-limiter to progress is human institutional inertia.
The gap between capability and governance is widening. The costs are compounding. And they are denominated in things that matter: lives, institutional legitimacy, national security.
I. THE PACE IS THE PROBLEM: BUT NOT THE ONE YOU THINK
In the last few years [even since the first time Mission Control convened this Summit], the pace of everything has changed.
Competition has changed. Internal adoption has changed. External adoption has changed. Innovation itself has changed. The volume of information confronting any single decision-maker now exceeds unaided human processing. Organizations are adopting AI not because they have a strategy, but because they believe they cannot afford to be seen without one or be caught standing still. So what is the natural outcome? Reactive adoption at scale: without clear use cases, without governance frameworks, without any credible measurement of return. If this is not addressed, it will undo much of the commercial and national security enterprise value of AI. Worth noting: the etymology of governance lies in the notion of steersmanship: of setting a course and staying on it. It is easy to talk about AI without Capital-G Governance [a lack of controls, monitoring, evaluation: the litany of usual interventions]. The harder conversation is about a lack of lowercase-g governance. A lack of aim. A lack of steering. If there is something that threatens AI, or that makes AI a threat [both in terms of its growth and its potential to drive transformational positive impact in the world], it is this: organizations' Governance failures will hurt, but their governance failures will be fatal.
What has broken under this pressure is extensive. Incentive structures reward adoption volume over adoption quality. Governance mechanisms designed for deterministic software cannot accommodate probabilistic, generative systems [let alone those that act autonomously, have a sense of agency in the world, or can use software and legacy infrastructure to accomplish real work!]. Upskilling efforts lag the tools they were built to support. It looks less and less likely that workers are excited to adopt a Co-anything. And the human cost is real and specific: weekends that were previously free, the quiet conviction that your organization has already fallen irretrievably behind, the compulsion to use AI simply to keep pace with a flow of information no person can process alone. Each of these was raised.
The prior generation of AI trust tooling [think: RMFs, asset inventories, risk assessments designed for known deployments] cannot hold. Agentic AI is accessible to every employee: increasingly embedded in the tools they use, or at home on their other laptop [Shadow AI is real, painful, and hard to find]. Emergent behaviors outrun the taxonomies built to contain them. Privacy requirements constrain the very research access needed to evaluate and improve these systems.
Once upon a time, it was considered an interesting line of inquiry to discuss whether or not sufficiently advanced AI would reshape institutions, economies, and the exercise of power. We have the data now. It's abundantly clear that the answer is "yes, and faster than we thought." Now, a new question emerges: whether the people responsible for those things will build the infrastructure [of measurement, of governance, of deliberate human judgment and institutional capacity] fast enough to direct that reshaping. Failure here means that they will, instead, be subjected to it. The window for deciding paths is not infinite. And the gap between capability and steersmanship will widen in the coming months. Closing it is less an academic exercise than the operational priority of the next decade. ↣
II. THE GAP BETWEEN WHAT IS SAID AND WHAT IS KNOWN
There is a sharp divide between what organizations say about AI and what they know about AI.
C-suite leadership champions wholesale adoption. Risk officers, General Counsel, and revenue-facing functions quietly absorb the exposure. One working group during the Summit named it precisely: a gap between euphoria and fear. The executive who fails to trumpet AI transformation risks irrelevance. The downstream consequences [legal exposure, quality degradation, unquantified risk] fall to people with less institutional voice.
This is not a stable arrangement. Decisions are being deferred across every dimension that matters.
Hire people or integrate tools?
Commit to a regulatory approach when jurisdictional questions remain open and regulation is being stress-tested by capabilities it never anticipated?
Define work through deterministic statements of work or flexible statements of objectives [and what does each approach cost when the underlying technology shifts every quarter?]
The political environment [particularly in the United States] has made it harder to even discuss topics adjacent to AI Governance: inclusion, equity, institutional values. They are load-bearing concerns, and they are becoming unspeakable in precisely the rooms where they most need to be spoken. By proxy, AI Governance itself is becoming unspeakable too: exactly at the time when it matters the most.
No working group reported a credible, widely adopted framework for measuring AI ROI. Not one. Risk quantification is similarly underdeveloped. Worse, boards are not asking the questions that matter: whether alternatives to dominant vendors exist, what workforce reduction does to the long-term talent pipeline, how AI-driven efficiency reshapes pricing and procurement, or what happens after the first-order productivity gains are captured [and they will be captured quickly, and then what?]. This absence of measurement is a structural vulnerability in every organization that has committed resources to AI without a means of knowing whether those resources are working.
One participant put it in terms that recurred for the rest of the day: truth is becoming a commodity.
Access to higher-capability models is stratified by ability to pay. Organizations and populations with fewer resources are at risk of increasingly relying on lower-quality AI systems. Follow this to its conclusion: the accuracy of the tools available to you will depend on what you can afford. We should carefully measure whether or not we are building a world in which the quality of truth is priced like everything else. ↣
III. WEAPONISATION IS PRESENT TENSE
Weaponisation is not a future risk. It is present tense. Generative AI is already being deployed as an instrument of political communication, law enforcement surveillance, and information manipulation — by state actors and private entities alike, across multiple jurisdictions. Counter-surveillance tools are proliferating in response. The cycle is accelerating. One participant's summary: "Things are getting cyberpunk very quickly." They were not exaggerating.
The question worth asking is not whether this is happening [it is]. The question is what needs to be built to meet it.
The governance frameworks that treat misuse as hypothetical are already obsolete. What replaces them? The detection infrastructure for manipulated media remains fragmented and underfunded. Better mousetraps, smarter mice. The legal frameworks for real-time biometric identification in public spaces are unsettled in nearly every jurisdiction. The norms around state use of generative AI for public communications do not exist.
Each of these is a gap. Each is also an opportunity. These opportunities will be seized by the organizations willing to do the work of filling them before the defaults are set by whoever moves first.
Who worries about AI and who does not maps cleanly to proximity to consequence. Those least concerned have the greatest distance from its effects: assumed confidence that disruption won't reach them, more immediately pressing material concerns, or commercial interests aligned with acceleration [it is easy to love the wave when you own the surfboard]. According to attendees, those most engaged with the risks include: trust and safety professionals [many of whom are being made redundant at exactly the moment their work matters most], educators, environmentalists, women [a demographic repeatedly identified across working groups as disproportionately exposed to AI-driven disruption], SaaS providers navigating fundamental shifts to their business models, and governments building capacity to regulate capabilities that evolve faster than legislation.
Beneath the specifics, a harder question surfaced across every working group: where is humanity in all of this?
The focus on capability has outpaced attention to values: not as an abstraction but as an engineering problem. The braking mechanisms [civil liability, cultural norms, consumer expectations] have not kept pace. The expansion of algorithmic management from gig workers to white-collar employees may prove to be an inflection point: there is likely a threshold at which workers broadly reject the experience of being algorithmically monitored, managed, and measured.
Whether that threshold arrives before the systems are normalized is an open question. Whether it produces something constructive [new norms, new contracts, new design requirements] rather than mere resistance depends on whether organizations treat that signal as input or obstacle.
Predictably: trust and transparency ran through every discussion. How do AI systems make decisions? What data do they rely on? Who qualifies as an expert when the field evolves faster than credentials can track?
Yet one group raised an observation worth sitting with: lying itself is culturally and linguistically constructed. Dishonesty as understood in English differs from how it is understood in German. Machine-generated falsehood is neither. It is a third category. It is novel and unmoored from the social contracts that make human deception legible.
We do not yet have the vocabulary for it. Building that vocabulary [and the detection, attribution, and accountability infrastructure around it] is among the most consequential near-term opportunities in the field. Where so many of us have used 'AI Trust' metaphorically, we may have to start using it literally. ↣
IV. THE PEOPLE WHO WILL NAVIGATE THIS
When asked who is best positioned to lead through this transition, the room moved away from titles and toward qualities.
Growth mindset.
Comfort with ambiguity.
The ability to build trust across functions.
Willingness to experiment without requiring certainty.
Contextual sensitivity [the capacity to think through second- and third-order effects, not just the immediate win].
The premium on these classic virtues has never been higher.
And perhaps above all: the ability to create productive friction. Not the removal of obstacles: the opposite! The knowledge of where to introduce deliberate pauses, forcing functions, and checkpoints that improve the quality of adoption without paralyzing it. This is the quality that distinguished the conversation from standard innovation rhetoric. The room was looking for people who know where the path needs a gate.
A generational inversion was raised with some force. Senior leaders may carry decades of domain experience but often have less than a year of meaningful engagement with AI tools. Front-line workers [in sales, engineering, development] are frequently the most fluent users, with the strongest practical intuition for what works and what does not. The expertise pyramid needs to invert, or at minimum run in both directions.
The implication is uncomfortable but clear: in many organizations, the people making strategic decisions about AI are the least qualified to make them, and the people most qualified have the least institutional authority to do so.
Inter-institutional coordination is harder still. It works when it is both necessary and friction-reducing [shared production inputs, cross-sector transactions, mutual dependency]. It fails in domains that require planning, prevention, and shared accountability: precisely the domains most relevant to AI governance. A structural incentive problem compounds this: organizations are disincentivized from sharing failures [no one publishes the post-mortem on the AI deployment that quietly cost them six months and a client relationship]. Without that signal, the same mistakes propagate across industries.
An operating philosophy that captured the room came from the special operations community: slow is smooth, and smooth is fast.
Deliberate deceleration not as resistance to progress, but as its precondition. Shape the legal and regulatory landscape rather than waiting for courts to adjudicate incrementally. Institutionalize privacy redefinition rather than treating it as an afterthought. Treat societal pushback [consumer resistance to organizations that offload work onto customers while capturing efficiency gains for themselves] as a design input, not an obstacle to be managed. The AI investment bubble concentrates the mind here: the market will eventually demand evidence that the costs are justified. Organizations without rigorous measurement will be the most exposed. The ones that moved fastest without building the infrastructure to know whether speed was working will be the first to answer for it. ↣
WHAT THE ROOM CONCLUDED
Governance must accelerate. The deficit is measurement, oversight, and human infrastructure. The gap is widening and the costs are compounding [and they compound in the currencies that matter most: legal exposure, institutional legitimacy, and the trust of the people inside these organizations].
ROI measurement is the critical missing infrastructure. Without a credible framework, organizations cannot distinguish productive adoption from performative adoption, and boards cannot ask the right questions [or even know which questions to ask]. This is a strategic vulnerability.
Weaponisation is not a future risk. It is a present condition. Governance frameworks that treat misuse as hypothetical are already obsolete [and the organizations still building for a world in which misuse is theoretical are building for a world that no longer exists].
The expertise pyramid is inverting. Front-line practitioners often have more relevant AI fluency than the senior leaders making adoption decisions. Organizations that don't build bidirectional channels will make poorer choices [and will not know they are making them until the consequences arrive].
Productive friction is a design requirement. The most effective approach is not to remove all obstacles but to be deliberate about where friction is introduced. ↣