Artificial intelligence is advancing faster than the institutions that govern it, because technological capability and governance authority are held by different actors with no stable institutional arrangement connecting them. Nearly 90 percent of the world’s most notable AI models in 2024 were developed by industry actors, up from 60 percent just a year earlier, and the capacity to build frontier systems remains concentrated among a small number of firms with the capital and infrastructure to do so. As a result, the decisions that shape AI capabilities and risks are taken inside private organizations, largely beyond the reach of public oversight. The central challenge of AI governance is therefore structural: authority defines rules, but capability determines outcomes. This paper examines the consequences of that gap. It is meant as a contribution to the Global Dialogue on AI Governance, which is by design a multistakeholder process. Addressing the mismatch requires new participation arrangements, clearer accountability criteria, and institutional designs that connect normative authority to operational reality.
The paper proceeds in three steps.
- The first part (“The Paradox”) examines the central tension at the heart of current AI governance: who controls the technology, and who sets the rules. It shows how existing frameworks rest on three flawed premises: a unified “private sector” that does not exist in practice, regulatory visibility that does not hold for frontier AI development, and participation mechanisms that fall short of the transparency that effective oversight requires.
- The second part (“The Problem”) analyzes the costs of this misalignment. It examines the risks created by a system in which public authority and private capability remain structurally disconnected, and the consequences of disengagement for both states and firms.
- The third part (“The Prospects”) assesses what the current governance architecture can and cannot do. It evaluates the system against the functions that multilateral AI governance is expected to perform with respect to the private sector, identifying where it succeeds, where it fails, and the specific risks produced by the gap.
This paper is diagnostic by design. It does not advance institutional solutions or operational models for private-sector engagement at this stage. Its purpose is to establish a clear analytical baseline: to define the functions AI governance must perform and to identify where current arrangements fall short. Premature design proposals, advanced in the absence of this shared diagnosis, risk reproducing the same structural weaknesses. A second, forthcoming paper, drawing on consultations with private-sector actors and other stakeholders, will translate these findings into concrete institutional options.