Mission Briefings

The ISDO Cycle: yes, everything in AI is speeding up.

Why AI capabilities and democratization are accelerating.

The ISDO Cycle is a model to explore how and why LLMs are improving and proliferating.

The ISDO Cycle is a conceptual framework describing how and why improvements in AI capabilities - and their proliferation, especially for Large Language Models (LLMs) - appear to be accelerating.

ISDO is a four-stroke process: Innovate, Scale, Democratize, and Optimize. It encapsulates the self-reinforcing cycle driving the current Cambrian explosion in AI technology - especially the contributions of the open-source community.

The cycle is a heuristic for understanding the synergistic interplay between model architecture innovation, compute scaling, open access and dissemination, and optimization techniques that drive model resource requirements down while retaining capabilities.

Each component not only contributes to the cycle individually but also amplifies the effects of the other components, thereby accelerating the overall rate of progress.

How ISDO works.

The Innovate phase in ISDO captures the release of novel model architectures and new capabilities. It primarily manifests through the introduction of novel model architectures, meta-architectures (like Mixtures of Experts), and architectural subsystems (such as, within the transformer, innovations in feedforward layers, encoder and decoder arrangements, and attention mechanisms). The Innovate phase sets new baselines and enables new capabilities.

It often involves interdisciplinary insights, borrowing from foundational fields like neuroscience and the other cognitive sciences, to reset the state of the art (SoTA) for what ML models can achieve. The Innovate phase is dominated by Frontier Model labs.

The Scale phase transforms innovative architectures into high-performing models. It involves techniques like increasing the number of parameters, training tokens, and training epochs. Scaling not only improves the model's performance on existing tasks but also often enables the model to generalize to tasks it was not explicitly trained for - to gain new capabilities. The Scale phase follows fast from (if not concurrent with) the Innovate phase.

A caveat: the relationship between scale and performance has historically held up (model loss scaled with, at minimum, training tokens and parameter count) - though Q2/Q3 2023 data suggests that the “Scale is all you need” adage is breaking in two ways: (1) SoTA model performance in GPT-4 appears to have been accomplished using a Mixture of Experts approach, and (2) smaller models (as low as 1B parameters) trained on better datasets appear to reach or exceed SoTA benchmarks. It is arguable that these new small-parameter models are actually just examples of speed-running the Democratize-Optimize phases.
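The scale-performance relationship above can be made concrete with a Chinchilla-style scaling law, L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. A minimal sketch, using the coefficients from the published Chinchilla fit (Hoffmann et al., 2022) as illustrative values, not exact predictions:

```python
# A Chinchilla-style scaling law: loss falls as a power law in both parameters
# and training tokens, approaching an irreducible floor E. Coefficients are the
# published Chinchilla fit and should be treated as illustrative only.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit constants
    alpha, beta = 0.34, 0.28       # parameter and data exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss falls as either axis of scale grows - the classic "Scale" story:
small = predicted_loss(1e9, 300e9)    # ~1B params, 300B tokens
large = predicted_loss(70e9, 1.4e12)  # ~70B params, 1.4T tokens
assert large < small
```

The caveat in the text shows up here too: because the data term B/D^β is independent of N, a small model trained on more (or better) tokens can close much of the gap - which is exactly the small-model trend described above.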

The Democratize phase is characterized by the re-creation of previously closed-source model capabilities by the open-source community, immediately proliferating what were previously restricted capabilities.

This dissemination and proliferation of the SoTA is primarily driven through a communication pipeline of preprint releases (on arXiv), model codebases and weights (on GitHub and Hugging Face), and broadcast on Twitter. This open-access model allows for rapid peer review, replication, and adaptation by the broader scientific community.

The democratization process is vital for the proliferation of AI technologies, as it enables what is effectively a decentralized network of researchers and engineers to coordinate and contribute to the iterative improvement of models and architectures, accelerating the overall rate of innovation by increasing the number of practitioners participating.

The Democratize phase is slower than the Scale phase, and there is usually a time delay between the two. We predict this delay will decrease as (1) worldwide upskilling in ML techniques yields more practitioners capable of doing the work, (2) more effective execution environments and user-friendly ML tooling make it easier for people to get involved, and (3) the improving cost and availability of cloud compute and GPU access puts high-end compute within reach of more people.

In the Optimize phase, the now open-source models are made more efficient without material compromises in model quality. What used to take 70B parameters takes 30B. What used to take GPU access at $30/hr takes $3/hr. The Optimize phase, in conjunction with the Democratize phase, is what really puts increasingly powerful AI capabilities in everyone's hands.

In 2023, the Optimize phase has been dominated by techniques like parameter quantization, parameter pruning, and architectural innovations that decrease the computational requirements and GPU memory footprint of models. This makes the models (1) more effective at production inference workloads, by lowering the total GPU requirements for serving, (2) better suited for deployment on edge devices and local compute, and (3) less environmentally costly to run.
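To make the quantization idea concrete, here is a toy sketch of symmetric int8 quantization - the core trick behind much of the Optimize phase's memory savings: store each 32-bit float weight as an 8-bit integer plus one shared scale factor. Production tooling (bitsandbytes, GPTQ, llama.cpp's quantized formats) is far more sophisticated; this is only the underlying principle.

```python
# Toy symmetric int8 quantization: ~4x smaller storage (1 byte per weight
# instead of 4), at the cost of a small, bounded rounding error per weight.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127   # map the largest weight to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

w = [0.82, -1.27, 0.05, 0.3]
q, s = quantize(w)
w_hat = dequantize(q, s)
# Every reconstructed weight is within one quantization step of the original:
assert all(abs(a - b) <= s for a, b in zip(w, w_hat))
```

The bounded per-weight error is why, as the next paragraph notes, well-executed optimization leaves benchmark scores largely intact.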

Worth noting: when the Optimize phase works properly, models do not suffer material reductions in inference quality - which is, to most, the most striking part of the phase. Models that previously had high compute and memory requirements shed much of that footprint while still scoring comparably on common performance benchmarks.

The Optimize phase usually follows quickly after the Democratize phase. Once models have been released to the open-source community (Democratize), they're quickly taken up by that community and re-engineered for better performance. Optimization often feeds back into the Innovate phase, as the insights gained can inform the design of new, more efficient architectures.

Why ISDO happens.

The ISDO Cycle is not a random occurrence, but a phenomenon driven by positive feedback loops between the driving forces in each phase. Technological advancements in LLMs create new opportunities for economic value, which in turn attract capital investment; this capital is then funneled back into research and development, further fueling technological innovation - and the cycle repeats.
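This feedback structure can be sketched as a toy model: each pass through the cycle, capability gains attract capital and practitioners, which raise the rate of progress on the next pass. The numbers below are purely illustrative assumptions, not fitted to any real data - the point is only the qualitative shape of the dynamics.

```python
# Toy model of a positive feedback loop between capability and investment.
# Growth rates (0.5, 0.1) are arbitrary illustrative constants.

def run_cycles(n_cycles: int, capability: float = 1.0, investment: float = 1.0) -> float:
    for _ in range(n_cycles):
        investment *= 1 + 0.5 * capability   # gains attract proportionally more capital
        capability *= 1 + 0.1 * investment   # capital funds the next round of gains
    return capability

# Growth compounds: each cycle adds more capability than the one before it.
gains = [run_cycles(n) for n in range(1, 6)]
deltas = [b - a for a, b in zip(gains, gains[1:])]
assert all(d2 > d1 for d1, d2 in zip(deltas, deltas[1:]))
```

Because each variable multiplies the other's growth rate, the output is superexponential rather than merely exponential - a crude analogue of the "accelerating acceleration" claim made later in this piece.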

The cycle is self-perpetuating and accelerates over time due to strong incentive structures, ranging from the pursuit of academic recognition to the commercialization of LLM applications. Successive iterations disseminate new and more capable LLM systems, which in turn attract both more capital and practitioners to the process.

The alignment of these incentives ensures that each phase of the ISDO Cycle is not just a step in a linear process but a catalyst that amplifies the entire cycle, and we have seen this cycle play out many times and at an accelerating pace in the last few years alone.

Who drives ISDO.

The ISDO Cycle is propelled by an ecosystem of stakeholders, each contributing to different phases of the cycle.

Frontier Model labs (Google, OpenAI, Microsoft, Anthropic, Meta, Stability) are often the initiators of the Innovate and Scale phases. Substantial R&D budgets and GPU access drive breakthroughs in fundamental model architecture and large-scale implementations.

Their work sets the stage for the open-source community and organizations built around it. We'd be remiss not to h/t EleutherAI and Nous Research for their work in model democratization. These working groups and collectives democratize access to LLM technologies, enabling a broader range of contributors to participate in the development process. Notable individuals like Tom (The Bloke) Jobbins and Georgi Gerganov, who specialize in optimization techniques like quantization, play key roles in making models more accessible to the public and more efficient to run on consumer hardware.

Most recently, Venture Capital firms like A16Z serve as both benefactors and beneficiaries of this cycle. Traditionally, they invest in cutting-edge LLM technologies and provide the financing required for scaling and innovation. More recently, they've started writing grants for the open-source community's Democratize and Optimize phases as well, recognizing their pivotal role in driving the ISDO Cycle. This dual investment strategy not only maximizes their potential returns but also sustains the cycle by ensuring that both innovation and democratization are adequately funded.

Why ISDO matters.

ISDO is why everything in AI is speeding up

The ISDO Cycle seeks to explain the rapid pace of advancements across the LLM landscape. From research breakthroughs to the deployment of practical applications, the cycle encapsulates the root cause driving this acceleration through a series of feedback loops. It provides a structured framework for understanding why LLM technologies are not just evolving, but doing so at increasing speed; successive iterations through the cycle drive model capabilities forward as quickly as they drive proliferation - drawing in practitioners and capital alike for catalytic use in the next run through the cycle.

While the ISDO Cycle has been an underlying force propelling LLMs, the cycle itself is undergoing changes that make it even more effective. Technological advancements are making each phase of the cycle more efficient, and the influx of capital is amplifying these effects. As a result, the ISDO Cycle is not just a constant cycle; it's a cycle whose rate of acceleration is itself increasing, altering the landscape even as we navigate it.

Frontier Model capabilities are proliferating faster

Frontier models are not isolated phenomena, nor do they seem possible to adequately contain; their capabilities quickly become the industry standard due to the ISDO Cycle. The Democratize phase ensures that their emergent skills and reasoning abilities are disseminated widely, raising the bar on the SoTA. This rapid proliferation makes these models the yardstick against which future innovations are measured.

We predict there will be an open-source AI model comparable to GPT-4 by early 2024

Based on our (back of napkin, grug-brained) analysis of the current SoTA and execution speed of the ISDO cycle, Mission Control is predicting that we will see open-source models with capabilities comparable to the March 2023 release of GPT-4 by early 2024. The emergence of such models will further accelerate the ISDO Cycle by contributing new perspectives and optimizations.

There's a self-fulfilling Hyperstition for AI and hardware investment

The ISDO Cycle is, in a sense, hyperstitional: it describes a self-fulfilling process where belief in the potential of LLMs drives investment in hardware and research. This investment then facilitates this cycle-within-the-cycle, helping to make the envisioned LLM capabilities and necessary hardware a reality. Hyperstition serves as both a cause and an effect within the ISDO Cycle, which makes it a unique driver of LLM progress; every other cause in the loop is an isolated factor.

ISDO cycle conditions and pace are increasing, per IJ Good's concept of the "Intelligence Explosion"

The ISDO Cycle can be viewed as a modern manifestation of IJ Good's concept of the "Intelligence Explosion," where the conditions for AI development are not just improving but doing so at an exponential rate. This concept aligns with Good's prediction that smarter systems will create even smarter systems, in a recursive cycle. The ISDO Cycle is, in essence, the core driver of this concept, making it critical for understanding the trajectory of AI development and what can be expected as LLMs themselves become more tightly integrated into the cycle.

AI Governance is not getting any easier

The accelerating pace and increasing complexity of LLMs, both driven by the ISDO Cycle, pose significant challenges for governance. Regulatory frameworks struggle to keep up with the rapid advancements and the ethical implications these advancements introduce into the discourse. Governance, Risk, and Compliance software built around identifying and mitigating the various risk factors around LLM training, development, and use also suffer, as no approach is both broad enough and accurate enough to effectively govern the numerous current and future LLM use cases. The ISDO Cycle not only accelerates technological progress but also increases the urgency and exacerbates the complexity of establishing effective governance mechanisms.

The ISDO Cycle helps model the acceleration dynamics that we're witnessing in LLMs; it captures the interplay between innovation, scaling, democratization, and optimization, and how these elements create a self-reinforcing cycle of rapid progress. The framework highlights not only the root causes behind the pace of advancements, but also the ever-changing dynamics of the cycle. As we've seen, the ISDO Cycle is not static; it's a dynamic system with increasingly broad ramifications that affect both the technology itself and the larger ecosystem, including governance, investment strategies, and even civil society itself.


Request a demo of Mission Control GenOps.

Find out how AI clarity and trust drive mission success for your team.


Mission Control is The AI Responsibility Lab Public Benefit Corporation.

© 2023-2042