
Artificial intelligence is often framed as a technology problem. In reality, it is increasingly a coordination problem.
Firms will adopt AI where it improves productivity. Governments will attempt to manage labour transitions and social risk after the fact. But when technological change accelerates, the gap between private incentives and social outcomes widens.
The result is predictable: underinvestment in workforce transition, fragmented infrastructure development, and delayed responses to systemic risks.
The economic challenge is therefore not simply how to regulate AI, but how governments and firms coordinate investment and risk sharing as labour markets evolve.
During a research fellowship with Convergence Analysis, I co-designed a framework exploring how partnership models may need to evolve as AI reshapes labour markets and economic risk. The central idea is simple: different types of uncertainty require different coordination mechanisms.
The coordination problem behind AI adoption
AI deployment creates several forms of uncertainty simultaneously. Firms face volatile returns from emerging technologies. Governments face uncertainty over labour market impacts. Both sides confront investments whose payoffs may only appear years later.
At the same time, many of the benefits of AI adoption are externalities. Training programmes, safety standards, or infrastructure investments often generate value that spills beyond the firm making the investment.
Economists have long recognised this dynamic. When externalities and uncertainty interact, markets tend to underinvest.
The usual response is policy intervention. But traditional tools such as subsidies or regulation often fail because they do not address the underlying coordination problem.
What is required instead are institutional mechanisms that align incentives between governments and firms under uncertainty.
A framework for partnership design
One way to think about this is to match the type of economic uncertainty with the type of partnership mechanism most suited to addressing it.
The framework below groups public–private coordination mechanisms into four broad categories:
• risk-sharing mechanisms
• outcome-based partnerships
• operating partnerships
• backbone organisations
Each addresses a different economic failure that emerges when transformative technologies reshape markets.
Risk-sharing mechanisms
Some investments involve long time horizons, volatile returns, or technologies that may quickly become obsolete.
In these environments firms often delay investment because the downside risks are too large relative to the private return.
Risk-sharing mechanisms address this problem by redistributing early-stage risk.
Examples include first-loss funds, co-investment structures, or government guarantees that reduce the financial exposure of private actors.
These mechanisms are particularly relevant for frontier infrastructure and experimental technologies, where uncertainty over future markets is high.
Outcome-based partnerships
In other areas the uncertainty lies not in whether investment will occur, but in whether it will succeed.
Training programmes illustrate this well. Governments may subsidise reskilling programmes, but it is often unclear whether they will actually lead to employment.
Outcome-based partnerships attempt to solve this by tying financial returns to measurable outcomes.
Mechanisms such as social impact bonds allow investors to fund programmes while governments only pay if results are achieved.
This approach shifts incentives toward measurable economic outcomes rather than programme inputs.
Operating partnerships
Some investments require ongoing coordination rather than one-off financing.
Digital infrastructure, compute capacity, or shared research facilities are examples where long-term operation matters as much as initial construction.
In these contexts, operating partnerships such as public–private infrastructure concessions may be more appropriate.
These structures combine public oversight with private operational expertise while allowing contracts to adapt as technologies evolve.
Backbone organisations
Finally, some areas require ecosystem-level coordination rather than bilateral partnerships.
AI safety standards, shared evaluation infrastructure, and incident reporting systems are examples of problems where no single organisation has sufficient authority or information to coordinate the system.
Backbone organisations can fill this role by convening stakeholders, coordinating standards, and providing shared infrastructure.
In emerging technology sectors, these institutions often act as governance anchors for the broader ecosystem.
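The matching logic behind the four mechanisms above can be summarised as a simple lookup. This is a minimal sketch of my own, not code from the report; the uncertainty labels are shorthand I have invented to index the article's four categories.

```python
# Illustrative sketch: matching a dominant type of economic uncertainty
# to the coordination mechanism discussed above. The keys are invented
# shorthand labels, not a formal taxonomy from the report.

MECHANISMS = {
    # Long horizons, volatile returns, obsolescence risk
    "return_volatility": "risk-sharing mechanism (e.g. first-loss funds, guarantees)",
    # Investment occurs, but success is uncertain yet measurable
    "outcome_uncertainty": "outcome-based partnership (e.g. social impact bonds)",
    # Long-term operation matters as much as initial construction
    "operational_complexity": "operating partnership (e.g. infrastructure concessions)",
    # No single actor has the authority or information to coordinate
    "ecosystem_coordination": "backbone organisation (standards, shared infrastructure)",
}

def suggest_mechanism(uncertainty_type: str) -> str:
    """Return the partnership mechanism matched to an uncertainty type."""
    return MECHANISMS.get(uncertainty_type, "no single mechanism; combine approaches")

print(suggest_mechanism("outcome_uncertainty"))
```

In practice a sector rarely exhibits only one uncertainty type, which is why the fallback branch matters: most real partnership designs layer several of these mechanisms.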
Why this matters for firms
For companies adopting AI, the economic question is not simply which technologies to deploy.
It is how adoption interacts with labour markets, regulatory expectations, and broader economic systems.
Firms that move quickly without addressing these coordination challenges may face:
• workforce transition bottlenecks
• regulatory backlash
• fragmented infrastructure development
• reputational risks associated with labour displacement
Conversely, firms that engage early with governments and ecosystem partners can shape the institutional frameworks that support sustainable adoption.
This often means designing partnership models that align incentives across actors rather than relying solely on internal strategy.
From economic theory to practical strategy
For policymakers, this framework provides a way to think systematically about which partnership structures match different economic risks.
For firms, it provides a lens for identifying where collaboration with governments or sector partners may accelerate adoption while reducing long-term risk.
In practice, this kind of analysis often begins with three questions:
Where are the key uncertainties in AI adoption within a sector?
Which externalities are preventing efficient private investment?
Which coordination mechanism best aligns incentives between public and private actors?
Answering those questions requires economic analysis that combines market structure, labour dynamics, and institutional design.
As AI continues to reshape industries, the organisations that navigate this transition most effectively will not simply deploy new technologies.
They will design new coordination models for the economy that emerges around them.
Read the report here: https://drive.google.com/file/d/1vB8EI0jvVLOhWf7WT6g4tq3aPbCETsL6/view
ABOUT
RG Economics provides independent economic, quantitative and AI advisory to support high-stakes strategic decisions in dynamic market environments.