The rapid development of emerging technology and artificial intelligence (AI) offers opportunities to create a more equal, just and inclusive world, but only if governance systems move in concert with innovation.
To leverage the positive potential of AI, the multilateral system needs to utilize the tools already at its disposal. These tools include the capacity to set normative frameworks, institutionalize universal values, and facilitate collective action among diverse stakeholders. In particular, years of research, policy frameworks, partnership development, and programming to advance the sustainable development agenda (Agenda 2030) could become an asset to maximize the positive potential of the AI revolution. These longstanding efforts will not only benefit from new technologies; they also offer channels through which the positive potential of AI can flow.
One such effort is the international movement to provide equal access to justice for all by reshaping the way we think about justice and promoting a people-centered approach to addressing actual legal needs (SDG16.3). A world where everyone has equal access to justice would naturally complement more equal and inclusive AI governance.
In fact, when designing new frameworks to regulate emerging technology, policymakers must use an approach that mitigates power imbalances at the individual and institutional levels, undercutting the risk of elite capture by making technology and its use more accessible.
At the global level, too often the multilateral system is one step behind the world’s next challenge, and ill-prepared to recover from misaligned incentives, power imbalances, and unfettered capture. Focusing on justice could help the system catch up.
This analysis makes a two-pronged argument:
- Strong, well-designed legal systems, with the principle of equal access to justice for all at the center, could support a more balanced, inclusive, and equitable AI revolution, while extending the benefits of technology to everyone. Therefore, the multilateral system should prioritize equal access to justice for all as a way to facilitate the positive potential of AI.
- Reciprocally, an inclusive and equitable AI revolution could increase equal access to justice for all.
Deepening Engagement with the Global Digital Compact
As we approach the first anniversary of the adoption of the Global Digital Compact (GDC), we should take stock of progress made toward more coordinated and effective global digital governance.
The GDC is an attempt to set global digital governance objectives using the universality of the United Nations (UN). “The core of the GDC is to ensure that the digital revolution benefits humanity while safeguarding universal human rights and promoting equitable growth.” It is an important governing doctrine to coalesce an otherwise disaggregated AI governance system of “almost thirty UN-related bodies, conferences and fora dealing with digital issues, alongside numerous other international and regional intergovernmental platforms.” As my colleagues have pointed out, “while the Compact establishes a comprehensive, inclusive, and forward-looking framework with the potential to meaningfully transform digital governance, it also explicitly leaves some issues outside of its scope, deferring them to future revisions.” With the rapid advancement of digital innovation and digital governance, we cannot wait too long to explore what some of those issues are and how we can collectively address them.
Not only do governments need to implement the outcomes of the GDC, they also need to inject a sense of innovation, ownership, and new partnerships around the potential of AI’s prosocial impacts. Today, the AI revolution is being driven by a small handful of powerful private companies. The multilateral governance system must be more than a containment strategy. If the system positions itself with a primarily regulatory responsibility, this familiar flavor of catch-up multilateralism will limit public sector engagement to being reactive, rather than proactive. Being proactive pairs regulation with guidance for the development and application of innovation. Doing so will require a more inclusive multilateral agenda-setting approach.
The UN is making efforts to ensure national and regional AI governance systems are more inclusive, but these efforts are top-down, still nascent, and largely reactionary.
The 2024 Global AI for Humanity Report recognized the need to pair regulation with intervention, distribution, investment, and coalition building. Subsequently, the GDC set governance objectives and established underlying universal principles. In many thematic areas, we see efforts by UN bodies like the International Telecommunication Union (ITU) working to coordinate collective action across specific groups.
More recently, the UN Resolution on an AI Panel and Dialogue, led by Costa Rica and Spain, will support the creation of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on AI Governance. The former will focus on reviewing research on the opportunities, risks, and impacts of AI. The latter will provide a platform to “discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on AI governance.” This platform will focus on how to use AI to contribute to the achievement of the SDGs.
These two new mechanisms are promising and could not come soon enough. To facilitate proactive policy agendas alongside reactive governance schemes, member states, civil society organizations, and private sector stakeholders need to coordinate their priorities. There are high expectations that the new mechanisms provide space to achieve this coordination. Creating a collective voice among these actors can steer the digital revolution more meaningfully and in the right direction.
Where Does the Justice Sector Fit into Multilateral AI Governance?
Justice actors should be active participants in these new mechanisms because they can promote the use of a people-centered justice lens in their operations and outputs. Justice can complement social, economic, and political considerations of AI governance and innovation by ensuring accountability to individual rights and livelihoods, and providing people with the power to use legal mechanisms to maintain those rights and livelihoods when they are inevitably at risk. Making a people-centered justice approach central to these mechanisms will help enable the positive potential of AI and mitigate its harms.
However, there is no clear agenda for how national justice systems (whether ministries of justice, the judiciary, ombudsmen, legal aid providers, private sector practitioners, or civil society organizations) or international justice advocates should interact with digital innovation. This is an overlooked opportunity.
Today, we see a relatively uncoordinated approach from various actors in the justice sector trying to understand how emerging technology will impact their work and how they might use it to accomplish their goals. Oftentimes, these conversations barely scratch the surface, are happening in isolation without the involvement of the private sector or other key stakeholders, and are limited to understanding what AI is, how it functions, or how it might replace legal tasks. The decentralization of justice conversations that we are seeing will quickly become a hurdle to driving inclusive social impact through the new UN mechanisms on AI.
Nevertheless, there is a clear opportunity and need to create more meaningful cross-sector dialogue between justice and technology experts. This relationship could create benefits for both camps.
Improved Coordination Can Prevent Unequal Public Benefits
There is real potential for the application of AI and emerging technology to increase universal access to justice. People-centered justice experts have suggested the possibility of AI being used to scale mediation services, improve access to legal information, create more efficiency in case management, address bias, support legal empowerment, and even streamline dispute resolutions in a way that sits across informal and formal justice systems. Some have pointed out the potential of AI to support pretrial risk assessments or predict whether defendants are likely to reoffend. Meanwhile, the OECD has established guidance for how governments can effectively harness AI for public sector benefits, including in the justice system.
Some national efforts to systemically embed AI into justice systems are already well underway. For example, the United Kingdom (UK) launched a Justice AI Unit which is set to oversee the country’s new AI Action Plan for Justice. The UK plans to adopt AI across courts, tribunals, prisons, probation, and supporting services with the goal to deliver “swifter, fairer, and more accessible justice for all.” Other countries have begun to adopt AI tools in more limited ways. For example, the Ministry of Justice in Spain is leading a project to use AI to streamline the processing of legal documents and the automation of repetitive legal tasks.
Even with burgeoning excitement and quick uptake of AI for justice, there is no central coordination mechanism to guide the evolution of applied AI in justice systems. Without coordination, increasingly unequal outcomes in justice service delivery between countries are likely. “As of 2024, 118 countries are not involved in any significant AI governance initiatives.” We live in a world in which the potential to use AI to improve access to justice is limited to those countries with 1) a seat at the governance table, and 2) equal access to the benefits of AI innovation. These are issues of power and finance.
Private Companies are Key Partners for Increased Access to Digital Platforms
The risk of increasingly unequal access to justice runs not just from nation to nation, but also from individual to individual. The Global AI for Humanity Report highlights that, “widening digital divides could limit the benefits of AI to a handful of States, companies and individuals.” The fewer the individuals connected to digital platforms while governments invest more in digitizing public provisions and private companies invest more in digitizing goods and services, the higher the likelihood that inequality persists indefinitely. This is why “Digital Public Infrastructure (DPI)—frameworks built and operated by governments that integrate digital identity, digital services and data exchange platforms, with the potential to drive societal and economic progress on a global scale,” plays a significant part in providing equal access to the digital evolution, and therefore to public services like access to justice.
Some argue that the GDC, as well as the Global AI for Humanity report, “fail to acknowledge the deepening structural inequalities that are being amplified by intensifying global processes of digitalization and datafication.” When the uptake of new innovations fails to remedy preexisting systemic issues, those innovations risk deepening the issues. Addressing the digital divide is a challenging proposition for the public sector of many developing countries to undertake alone at a time when international Official Development Assistance (ODA) has been thrown into flux by the shifting priorities of national aid agencies and budget cuts to ODA. Some private companies are raising their hands to be more involved in addressing this problem. They come with additional resources to support end-users once government objectives are established. Public sector parties need to be wary of the power they give away, as this collaboration is not inherently altruistic. However, when incentives do align, as is the case with access to digital public infrastructure, public-private partnerships should be harnessed.
For example, building out digital public infrastructure would increase a government’s ability to provide public services such as access to justice through digital tools, but it would also provide a private company with more end users digitally connected to their goods and services. Public actors, guided through multilateral platforms, can bring private actors into a regulatory environment of transparency, accountability, and responsibility, as partners on the implementation of applied AI and emerging technology. The private sector can help create innovative solutions to challenges like, “mitigating bias in AI and ensuring equitable access to digital services.” In this instance, the public sector is not just providing a regulatory environment, but actually getting involved in the design and application of emerging technological interventions.
Access to Justice as a Tool for AI Governance and Application
Digital innovation for the benefit of sustainable development is central to the GDC. Technology innovations could touch every corner of the Sustainable Development Agenda as envisioned in the GDC. However, policymakers should not ask the question, “how will technology impact our development strategies?” without also asking, “how can our development strategies shape the direction of technology innovation?” People-centered justice offers reciprocal benefits for the governance and application of technological innovation.
The relationship of people-centered justice interventions—ones that use “people’s justice needs and lived experiences as a starting point for intervention”—and the desired outcomes of the GDC is symbiotic. People-centered justice can support an environment for the societal co-production of technology governance and AI application. As norms are established internationally and translated into domestic law, access to justice supports the likelihood that the benefits of those laws are experienced equally among a population. If rights-based technological innovation and application is the “what,” then equal access to justice for all is the “how.” It is in the interest of policymakers to look to justice interventions as a way to give operational teeth to the normative guidance of the GDC.
People-centered justice mitigates power imbalances at the individual and institutional levels and ensures accountability. Through tools like legal empowerment—supporting individuals’ ability to know, use, and shape the law—alongside effective (formal and informal) dispute resolution mechanisms, access to justice helps to disperse the power to use and wield technology equally. It can help to ensure that rights are upheld during the creation and application of new technology. In instances where those rights are not upheld, people-centered justice offers a means to hold institutions, and individuals, to account. Taken together, these tools help to undercut the elite capture of technology.
When Justice Systems Work Better, Technology Markets Work Better
When this works, when justice systems deliver with “certainty and equity” to prevent rights violations, people’s relationship with the state and businesses is strengthened. People know they have the tools to negotiate the use of new technology in their homes, communities, and countries. Effective justice strengthens trust in government institutions, and provides reliable grievance mechanisms as alternatives to general social unrest. Trust in government institutions supports the business environment and strong institutions support economies as a whole.
Rule of law and access to justice also provide “the foundation for democracy to thrive, refuting oligarchy and oppression, and promoting social wellbeing,” which reinforces positive business environments. Recent research shows that democratization causes an increase in gross domestic product (GDP) per capita by 20-25 percent, not to mention the economic cost of democratic decline. Finally, justice systems directly benefit private corporations by building trust within a marketplace and increasing legal inclusion which can create new business opportunities. Ultimately, equal access to justice can facilitate an enabling environment for public-private partnerships and strong economies in emerging technology and AI.
Recommendations
To make emerging technology and AI governance more than window-dressing policy, it needs teeth to be applied at the individual level. People-centered justice is a tool already in the toolbox of the multilateral system; it just needs to be better understood in the context of emerging technology and AI. Now is the time to cross-pollinate policy expertise and identify opportunities to apply the tool.
While the potential for emerging technology and AI to benefit global justice outcomes is clear, without interventions, the gap in access to digital infrastructure risks perpetuating the existing global justice gap and cementing inequalities between nations and individuals alike. Resolving this will require strategic interventions that distribute access to the benefits of technology. Some of these interventions will be found at the intersection of technology and access to justice.
The multilateral system, international organizations, national policymakers, civil society, and private companies can help facilitate the positive social impacts of emerging technology and AI by prioritizing a people-centered justice approach in evolving governance frameworks and AI application. Three key steps can facilitate this opportunity:
- Promote a multilateral platform on justice and AI (specifically) that prioritizes cross-sector discourse, collaboration, learning, and piloting. This platform can feed into the institutional processes heralded at the UN and the OECD, but it needs to be thematically explicit and inclusive in order to engage deeply with justice issues. This initiative can coalesce the currently disparate interventions at the AI-justice nexus. By acting as a knowledge hub, it can facilitate collective advocacy for the synergies of AI innovation, governance, and access to justice.
- Open up to greater collaboration and understanding between public and private actors for the improvement of digital public infrastructure to facilitate equitable access to prosocial technology innovations such as AI for justice.
- Identify and invest in access to justice interventions that support positive AI and emerging technology outcomes. The goal of the GDC for applied AI and emerging technology, and the goal of Agenda 2030 for achieving equal access to justice for all can be mutually reinforcing. This relationship must be explored in both directions to achieve mutual gains.