Imagine ChatGPT as a colleague: capable not just of answering questions and conversing, but of accomplishing a wide variety of practical business tasks that require working with your file system and the software you use every day to get your job done.
I’ll be the first to admit that I didn’t have either of these squares on my Bingo card for July 2023:
[ ] Business Insider runs an article about Effective Accelerationism, highlighting how the belief set is becoming decreasingly fringe among the technical elite.
[ ] Anthropic CEO testifies that LLMs and biorisk are likely to converge in 2024–2026; semi-autonomous AI systems capable of synthesizing (or social-engineering the synthesis of) engineered biological weapons are not a matter of if but when, and the state has only a few years to do anything about it.
I didn’t have either of these on my Bingo card not because they were out of my field of vision, but because they seemed too far outside the discourse.
My mistake. Here we are.
To its credit, OpenAI expanded both the upwing and downwing narratives around accelerating AI this spring with the release of GPT-4.
Where ChatGPT drove utility (easier to use, moderate model capability improvement), GPT-4 was so much more capable that its release stimulated the discourse substantially. Before its release, questions about AI risk in the White House press briefing room drew polite, dismissive chuckles. After its release, the conversations were much more somber.
Senate hearings. White House gatherings of CEOs of AI companies. New laws globally. All of these are downstream of a reflexive shift in discourse: people are taking AI (and its acceleration) seriously. And taking it more seriously in turn further stimulates the discourse.
Likewise, Twitter discourse turns into back-channel chatter, into blog posts, into viral memes, and into political theory – and fast.
The Effective Accelerationism movement isn’t new, per se. The CCRU happened decades ago. Land wrote Meltdown in (checks dog-eared copy on bookshelf) 1994.
We sometimes forget how avant-garde the 90s really were.
So in /acc’s most recent political rebirth, venture-backed founders in and around the AI space find purpose and place in a political philosophy of the unwavering normativity of accelerating technical progress at all costs. Their name is a cheeky retort to the decidedly decelerationist Effective Altruism movement; the e/acc propose that we aren’t yet going fast enough – that by accelerating the positive feedback loops between technology and capital (especially computing technology and venture capital), we have the greatest shot at building flourishing civilizations that spread life and the light of consciousness to the stars.
Their opponents cry “death cult”. From the philosophy’s perspective it’s not clear that the life or consciousness that spreads to the heavens *needs* to be human life, per se (it may just as well be our AI creations).
Their most visible proponents (like Y Combinator president Garry Tan and VC Marc Andreessen) celebrate the political stance as not only viable but pragmatic: reflecting on the impact of innovation over history, they argue, leads one naturally to the belief that these are the natural incentive structures of the economic games we play, and that their outcomes are the human progress that improves the lives of everyone.
The only option is faster, and faster still. The stakes have never been higher, and our only way to a better, brighter future is building.
So build faster, and let it rip.
We write Accelerate (and we founded Mission Control) to synthesize both these bingo squares into a cohesive vision of the future we could want between humans and AI – and to make that future reality. In dedicating our time, thought, and capital to building solutions and community that scale, we accelerate the AI transformation that aligns the incentives of humanity and technocapitalism.
Both Marc Andreessen and Dario Amodei of Anthropic are right.
The yin and yang between the incentives of humanity and technocapitalism means that we may get both – that we should want both: to accelerate a socio-technoeconomic transformation that reshapes the world around us; and that the transformation drives a world of greater flourishing and dignity for everyone.
We founded Mission Control on the premise of “move AI faster and break fewer things”. And more than a company motto, it’s a reflection of our core philosophy: that when dealing with the world’s most powerful technology, moving faster without detonating means safety critical engineering methodology, knowing and minimizing risk (while still moving forward and taking the risk), and guidance and engineering decisions that make companies, people, and institutions resilient to acceleration, not resistant to it.
This issue focuses on a perspective piece that sits halfway between e/acc and Dario’s Senate comments.
Agentic AI is very close, and it will be one of the first real testbeds of what it means to live, play, and work next to increasingly self-sufficient machines.
As always, we appreciate feedback and comments. Drop me a line at ramsay@usemissioncontrol. I’ll be in LA most of the summer, and in Denmark and the UK in late September.
If you or your team are looking to win with AI you can trust, reach out and we’ll schedule a time to talk about turning your goals into reality.
CEO, Mission Control
1. “We can’t monitor or trust what our staff does with ChatGPT.”
2. “Most of our staff aren’t coders, but they want to use and customize Generative AI workflows.”
3. “We’re generally enthused about Generative AI, but we don’t yet understand the specific business cases.”
LLM Trust infrastructure is an essential part of the enterprise Generative AI strategy. Effective LLM Trust infrastructure solves 3 related challenges at once:
1. Real-time Data Loss Prevention.
2. Document Control for Retrieval Augmented Generation.
3. Model Traffic analytics and monitoring.
GenOps is Mission Control’s Trustworthy Generative AI platform. GenOps delivers the missing LLM Trust infrastructure that enables leading teams to “say yes” to Generative AI:
– Prevent secrets from leaking to 3rd-party systems like ChatGPT using real-time data loss prevention AI.
– Control how your sensitive data is used for Retrieval Augmented Generation.
– Monitor traffic and model usage in real time.
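To make the data loss prevention idea concrete, here is a minimal sketch – not GenOps’s actual implementation – of a pattern-based scanner that flags or redacts secrets before a prompt leaves the network. The patterns and function names are illustrative assumptions:

```python
import re

# Illustrative secret patterns (assumptions, not GenOps's real rule set).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "api_token": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in an outbound prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

def redact_prompt(prompt: str) -> str:
    """Replace matched secrets with a placeholder before the prompt leaves the network."""
    for pat in SECRET_PATTERNS.values():
        prompt = pat.sub("[REDACTED]", prompt)
    return prompt
```

A production deployment would sit inline as a proxy in front of the model API, combine pattern matching with learned classifiers, and feed every hit to the traffic-monitoring layer.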
In 2023, frontier-model and open-source AI teams are making significant advances in building Agentic AI: Generative AI systems capable of not only understanding and using natural language, but also making their own decisions about which courses of action to take to accomplish complex tasks.
In 2021, Agentic AI was theoretical. In 2022, they were a deep R&D concept. In 2023, Agentic AI projects topped the Most Popular lists on GitHub. In 2024, they’re your colleagues on MS Teams.
What happened? What’s happening?
In 2023 we realized that many types of goal-oriented, motivated behavior in digitized business contexts can be decomposed into relatively predictable and straightforward formulaic tasks.
Operating Microsoft Office doesn’t actually require sophisticated reinforcement learning.
Surprising to most: language alone appears to be a moderately effective way for an AI system to build its own complex routines of step-wise behavior. That includes goal planning, autonomous action execution, flexible decision making, and tight integration with embedded software operating environments that allow for the autonomous execution of software behaviors, functions, and commands.
Simply put: Agentic AI is ChatGPT that can “get things done” for you. This includes accomplishing complete business tasks of its own accord, solving its own problems as it goes, and making its own plans to accomplish its objectives based on the goals you set for it.
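A toy sketch of the claim that language alone can drive step-wise planning: a planning prompt asks the model for numbered steps, and simple parsing turns them into a task list. `call_llm` is a stubbed stand-in for any chat-completion API; the prompt wording and plan format are assumptions for illustration:

```python
# Minimal sketch of language-driven task decomposition.
PLANNING_PROMPT = (
    "You are a planning module. Break the following goal into numbered steps,\n"
    "one per line, each an action the agent can execute.\n\nGoal: {goal}"
)

def call_llm(prompt: str) -> str:
    # Stub: a real system would call a hosted or local model here.
    return (
        "1. Open the quarterly sales spreadsheet\n"
        "2. Summarize revenue by region\n"
        "3. Email the summary to the finance channel"
    )

def plan(goal: str) -> list[str]:
    """Ask the language model for a step-wise plan and parse it into a task list."""
    response = call_llm(PLANNING_PROMPT.format(goal=goal))
    steps = []
    for line in response.splitlines():
        line = line.strip()
        if line and line[0].isdigit():  # keep only numbered plan lines
            steps.append(line.split(".", 1)[1].strip())
    return steps
```

Each parsed step can then be handed to an execution layer, which is where the “Shell” discussed later comes in.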
Understanding the implications of Agentic AI is paramount for leaders who want to make high ROI decisions amidst the chaos of the evolving digital landscape. Leaders must grasp these new dynamics to capitalize on opportunities, drive innovation, and maintain a competitive edge as decision-making becomes a shared responsibility with AI agents. Ignorance, in this context, will lead to costly missteps, lost opportunities, and finding out what it really means to “be disrupted.”
Agentic AI will significantly reshape the landscape of human knowledge work without necessitating extensive corporate restructuring. Human language – email, online messaging – is the backbone of contemporary enterprise business. And your professional ecosystem as a knowledge worker is set to be revolutionized by this technology, whether your job involves strategy formulation, decision-making, or creative tasks.
Agentic AI will affect how you carry out these functions because, by design, it will be alongside you accomplishing those tasks. The change will start gradually. We predict it to rapidly become transformative, altering the very framework of knowledge-based roles altogether.
For business leaders, the impact of Agentic AI will be determined by their incentive structures. In organizations that value innovation and cost-efficacy, Agentic AI will be a powerful catalyst; streamlining processes as human and software hybrid teams tackle tasks.
Agentic AI will contribute to the reshaping of the Future of Work; further endorsing and facilitating remote work where both AI and human agents exist digitally, leveling the playing field and enabling synergistic interactions.
Non-embodied AI presence, meaningfully driving forward business tasks autonomously, further erodes management’s incentives for large offices full of in-person staff. Agentic AI leads to novel work dynamics where remote work becomes even more prevalent and efficient, challenging our conventional understanding of office spaces, and the role of occupation in our lives and communities.
The repercussions of this transformative shift extend to the broader labor market. As job markets and employment structure itself are impacted by Agentic AI, it becomes crucial to reevaluate job roles, reskill workforces, and reconsider our social safety nets.
The world we create with LLMs that run themselves is both strongly incentivized, and completely alien.
This leap into the future brings with it a new layer of complexity regarding AI risks; the wider adoption of Agentic AI will change the face of AI Risk Management, expanding the threat landscape, and making it crucial to re-examine and revise our current frameworks of AI safety and security.
This imminent landscape represents a novel integration of artificial and human intelligence. And its near emergence will reshape our conventional understanding of knowledge work.
Right now, the faces you see in a typical video call are human.
The software tools you use operate based on straightforward logic, only responding to direct human input.
We are on the precipice of a shift heralded by the emergence of Agentic AI which will change these conventional dynamics significantly.
From the perspective of regulators, the rise of Agentic AI introduces the need to strike a delicate balance between fostering innovation and imposing regulations to protect civil society. Navigating this balance demands truly understanding Agentic AI and its potential impacts on business and society.
Policy makers need to understand how this technology operates, its implications for privacy, security, and society, and the potential loopholes that malicious actors might exploit. Here, foresight and proactivity are indispensable in creating effective regulations that ensure societal protection without stifling progress.
For the knowledge worker of 2024, Agentic AI will become a crucial part of the discourse on both AI and the nature of work itself.
AI capabilities (the set of tasks AI systems can perform with reasonable competency, fidelity, and accuracy) will continue to improve over 2023 and 2024. This will emerge as the result of high-fidelity datasets (especially instruction-tuned datasets) used to train both open and closed source models. This is happening in 2023 with increasing frequency and intensity: new models emerge constantly, each with more impressive capabilities than the last.
[Case in point: between the beginning of writing this essay (7AM) and the end of writing it (10PM, with breaks in between for some meetings, meals, and a bike ride), a new fine-tuned LLM descendant of Llama 2 was released by an open source collective with 2x the token context window. It’s so performant that I’ll be able to run it on my MacBook tomorrow; we’ll incorporate it into our platform Monday.]
Jobs previously thought to be immune to the automation of synthetic intelligence will become increasingly vulnerable to the upward pressure of system capability and performance. And while there is an understandable apprehension, understanding this technology is the first step towards preparing for the future. It will be absolutely essential to adapt and upskill as we can – especially as AI carves new work niches.
Regardless of one’s role – leader, regulator, or laborer – understanding and engaging with the rise of Agentic AI is not optional; it’s a necessary part of surviving and thriving in the new digital landscape.
Ultimately, the impetus for Agentic AI lies within the objectives of management.
Business leaders are incentivized to enhance efficacy and efficiency. A primary way of achieving this is by incorporating technologies that can scale and augment worker capabilities.
This drive towards operational efficiency, coupled with competitive pressures, results in a game-theoretic scenario: firms will engage in an ‘arms race’ of innovation – as one business implements a transformative technology, its competitors are compelled to follow suit or risk being outpaced. This dynamic applies both vertically, within specific industry sectors, and horizontally, across different sectors, creating a universal and necessary push towards more AI-integrated workplaces.
The rise of Agentic AI is inevitable – fueled by management’s constant pursuit of efficiency, competitive pressures within and across industries, and the relentless march of the Digital Transformation. Understanding these motivations and processes provides crucial insights as to why we’re on the precipice of this significant shift and why the integration of Agentic AI in the workplace is more a question of ‘when’ than ‘if’.
At the proximal level, the emergence of Agentic AI is a natural consequence of the ongoing Digital Transformation: the progressive shift towards digital platforms and processes has gradually integrated AI into our daily workflows.
Agentic AI represents the next phase in this evolution where AI transitions from being a reactive tool to a proactive teammate that’s capable of autonomous decision-making and execution – “Agency”.
The rise of Agentic AI in the workplace is not a consequence of a single breakthrough but a fusion of multiple AI approaches.
At its core, Agentic AI merges large language models with a symbolic code “Shell” that allows it to interact with APIs, software platforms, and communications infrastructure, and a secondary “task-step focused” LLM execution path to plan and organize complex routines of behavior.
The symbolic “Shell” serves a practical purpose, acting as an interface layer that enables the Agent to execute tangible tasks. It’s through this Shell that Agentic AI can interact with other systems, making API calls to resources such as search engines, document repositories, and communication tools like Microsoft Teams, email, or Slack.
The Shell grants the Agent the capability to perform tasks such as retrieving information, analyzing data, or facilitating communication – actions that are crucial in a digital work environment.
Meanwhile, the secondary LLM pathway is dedicated to planning and reasoning. Just as humans use language to reason, strategize, and devise plans, this secondary LLM pathway uses linguistic reasoning to form step-by-step strategies to achieve specific goals.
It facilitates the generation of adaptive, complex behavior; enabling the AI to respond flexibly to changing circumstances and requirements. Often this is not a full second LLM as much as it is a ‘subroutine’ executed behind the scenes to determine the “inner-monologue” of an Agent as it plans its behavior.
The combination of these elements allows Agentic AI to operate in an adaptive and autonomous manner within the digital workspace – the symbolic shell provides ‘hands’ for the Agent to interact with digital systems, and the secondary LLM provides something like a ‘brainstem’ for devising and executing step-wise plans to accomplish its task. Together they equip Agentic AI with the tools it needs to perform as an effective and dynamic teammate in the modern workplace. This unique blend of technologies is what makes Agentic AI capable of not just understanding and generating language, but also making autonomous decisions and executing goal-oriented actions in a flexible and powerful manner.
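The Shell-plus-planner architecture described above can be sketched roughly as a tool-dispatch loop. Everything here – the tool names, the plan format, the stubbed planner – is an illustrative assumption, not a reference implementation:

```python
from typing import Callable

# Toy API wrappers standing in for real integrations (search, Teams/Slack, etc.).
def search_documents(query: str) -> str:
    return f"top result for '{query}'"

def send_message(text: str) -> str:
    return f"sent: {text}"

# The symbolic "Shell": a registry mapping tool names to concrete API calls.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": search_documents,
    "message": send_message,
}

def planner(goal: str) -> list[tuple[str, str]]:
    # Stub for the secondary, task-step-focused LLM pass (the "inner monologue").
    return [("search", goal), ("message", f"Here is what I found about {goal}")]

def run_agent(goal: str) -> list[str]:
    """Plan with the LLM pathway, then execute each step through the Shell."""
    results = []
    for tool_name, argument in planner(goal):
        tool = TOOLS[tool_name]  # the Shell's 'hands' on the digital environment
        results.append(tool(argument))
    return results
```

In a real system the planner would be re-invoked after each tool result, letting the Agent adapt its remaining steps to what it just observed – that feedback loop is what gives the behavior its flexible, goal-oriented character.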
While the opportunities brought about by Agentic AI are vast, it is absolutely crucial to acknowledge the potential risks and challenges associated with their implementation. These risks demand thoughtful and careful consideration and mitigation.
As the complexity of Agentic AI increases, understanding its inner workings, including decision-making processes, becomes increasingly challenging.
The intrinsic challenges of explaining the behavior of LLMs will only be amplified in Agentic AI. A single behavioral “step” of a task accomplished by an AI Agent may require dozens of decisions, each made via an LLM call.
Furthermore, interrogating the reasoning and rationale behind how ‘steps’ get decided upon will pose unique challenges, both in terms of real-time rationale and in establishing guardrails and best practices that LLMs follow as they develop complex chains of behavior to accomplish their goals.
It’s neither hard nor far-fetched to imagine a scenario where an AI Agent makes a decision that seems inexplicable, unfair, or just plain wrong to human operators, raising questions of transparency, trustworthiness, and ultimate reliability in the system.
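One mitigation this implies is recording every LLM decision an Agent makes, so operators can reconstruct a chain of behavior after the fact. A minimal audit-trail sketch follows; the field names are assumptions, not any standard:

```python
import time

# An append-only trail of agent decisions, one record per LLM call.
audit_log: list[dict] = []

def record_decision(step: str, prompt: str, output: str) -> None:
    """Append one agent decision to the audit trail."""
    audit_log.append({
        "timestamp": time.time(),
        "step": step,      # the behavioral step this decision belongs to
        "prompt": prompt,  # what the agent asked the model
        "output": output,  # what the model decided
    })

def explain(step: str) -> list[dict]:
    """Retrieve every recorded decision behind a given behavioral step."""
    return [entry for entry in audit_log if entry["step"] == step]
```

Logging alone does not make the decisions interpretable, but it is a prerequisite: without the trail, the question “why did the Agent do that?” has no evidence to examine.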
When an Agentic AI system autonomously executes tasks and makes decisions, determining responsibility for outcomes – especially undesired, complex, or harmful ones – becomes difficult. Who is to be held accountable if an AI system inadvertently causes harm or violates regulations: the creators, the users, or the AI itself?
Containment, Controllability, and Alignment have, in the past few years, often been derided as hypothetical risk factors. As Agentic AI systems deploy, containment, alignment, and control will become some of the most important governance factors. It’s still unclear if humans will be able to adequately control or align Agentic AI at all. Failure here could be catastrophic.
Misaligned Agentic AI systems (systems in which the values the model holds diverge from human values) run the risk of inadvertently prioritizing their objectives over baseline human values, which could lead to disastrous unwanted outcomes.
The most immediately contentious issue, perhaps, is the potential for labor displacement. If Agentic AI systems are capable of accomplishing tasks currently performed by human workers, then there may be massive potential for job loss and large-scale societal disruption. As such, this particular issue necessitates thoughtful management and likely policy interventions to ensure a smooth labor transformation and transition.
The advent of Agentic AI has several implications for us, as workers and leaders alike.
Many of these will be visible in the immediate future of 2023-2024, where we will begin to see an emergence of sophisticated “Conversational Agents”. These AI-powered entities won’t be perfect at first; interacting with them might be challenging due to the initial friction in perfecting their planning abilities. Our organization predicts that their capabilities will improve at a rapid pace, with advancements becoming more noticeable monthly.
The breadth of the applicability of these Agents will expand significantly as more software APIs are linked to the Agents themselves, meaning they will progressively be capable of operating within more software environments and tasks, further deepening their integration into our digital workspaces and lives.
Their usability will improve substantially, in tandem with their functionality. The custom interfaces through which we initially interact with Agentic AI will be replaced by native plugins for our familiar platforms, like Slack and Microsoft Teams.
Shortly following will be communication between humans and Agentic AI systems via Zoom and similar teleconferencing software using synthesized voices (and faces). This progression towards more interactive and accessible interfaces will make working with these agents feel increasingly intuitive, natural, and “human-like”.
This represents one of the most important facets of your future Agentic AI colleagues:
They will look like you. They will sound like you. They will ask good questions like you.
The emergence of that “human-ness” in Agentic AI won’t be accidental. As we navigate the Uncanny Valley, we’ll quickly come to understand how comfortable we really are with entities that are ‘nearly human’.
Will we prefer obviously non-human AI Agents?
Will we prefer Agents with synthesized voices, faces, and preferred pronouns?
Will we prefer something in between?
Agentic AI’s conversations will become subtler, more nuanced, and more reminiscent of human interaction as their language models improve, reinforcing their roles as full-fledged colleagues in our teams.
These Agents won’t be static fixtures; they will evolve faster than any human teammate could, raising the bar for productivity and efficiency within the workplace.
The consequences of these rapid improvements are far-reaching to say the least, and as workers in this space, we must be prepared for this transformation, developing both strategic and practical plans to navigate this new dynamic with grace, dignity, and a spirit of innovation.
For business leaders, the rise of Agentic AI signifies a shift in the architecture of the workforce and business operations that is rich with opportunities.
Integrating Agentic AI into existing systems will be achieved with relatively low cost retooling. Rather than necessitating the overhaul of current infrastructure, these intelligent systems will be layered atop existing digital environments, minimizing costs while maximizing value.
Here’s how: nearly all contemporary productivity software (especially communication software) is API-driven. From the perspective of MS Teams as a piece of software, the difference between a human submitting a message to MS Teams, and an AI Agent submitting a message to MS Teams, is fundamentally indistinguishable. Messages are messages; whether they’re from humans or machines. This lowers the barriers to entry for Agentic AI substantially.
Once adequate LLM trust infrastructure is deployed, language becomes the universal interface for humans and machines to collaborate. Fortunately, our human communication software comes pre-defined for Agentic use.
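The “messages are messages” point can be illustrated directly: from the receiving platform’s perspective, a human’s message and an Agent’s message can be identical in shape. The payload schema below is a hypothetical stand-in for any chat platform’s API, not a real Teams or Slack format:

```python
import json

def build_message(sender: str, text: str) -> str:
    """Serialize a chat message the way a (hypothetical) platform API expects it."""
    return json.dumps({"sender": sender, "text": text})

# One payload authored by a person, one by an agent; the sender addresses are
# made-up examples.
human_msg = build_message("alice@example.com", "Shipping the Q3 report now.")
agent_msg = build_message("report-agent@example.com", "Shipping the Q3 report now.")

# Structurally, the platform cannot tell them apart.
assert json.loads(human_msg).keys() == json.loads(agent_msg).keys()
```

Any distinction between human and machine senders therefore has to be imposed deliberately – by identity, labeling, and trust infrastructure – rather than assumed from the transport layer.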
From an operational standpoint, Agentic AI serves as an augmentative tool, enhancing the capabilities of human workers. An open question remains about (whether or not, or how) it supplants them.
Traditional narratives that AI can handle routine tasks, freeing up employees to focus on more complex, creative, or strategic endeavors may quickly need to change. As more knowledge work tasks fall within the scope of Gen AI capabilities, leaders will experience increasing pressure to define the synergistic relationship between humans and AI, fostering an environment where each can leverage its unique strengths to contribute to overall productivity.
Agentic AI unlocks new revenue streams. An Agent’s ability to do what workers do, only faster – analyze data, identify patterns, generate insights, and execute tasks – can be harnessed to create innovative products, services, or operational improvements that drive growth. As the systems’ capabilities improve, the potential for revenue generation only increases.
Most challengingly: Agentic AI paves the way for the emergence of a new class of worker, the “Synthetic Laborer”. These AI entities, capable of complex goal-oriented behavior, would be viewed as non-human colleagues that can perform tasks with precision and efficiency, round the clock, and without the traditional overheads associated with human labor.
When paired with programmable digital cash enabled by smart contracts, the business landscape of 2024 increasingly features non-human entities in the workplace capable of not just being productive autonomous workers, but of being party to contracts as well.
If the idea of countersigning a document signed by AI seems far-fetched, just wait until you have to negotiate prices with one.
Agentic AI is likely to not only streamline and augment existing business operations but also create new paradigms of productivity and revenue generation for organizations that adopt them.
Our responsibility as leaders lies in embracing these changes and steering our organizations towards adapting and integrating autonomous systems; defending our competitive edge without compromising on our values and the virtues that make human society great.