Building Trust in AI through Justice

The world is facing a global trust deficit. Forty-four percent of people living in OECD countries—mostly high-income, democratic, free-market economies—have low or no trust in their governments. At the same time, artificial intelligence (AI) is spreading faster than any technology in human history, and its rapid development and adoption in public services risk deepening this crisis of confidence. Today, more people around the world are concerned about the increased use of AI than excited about it.

If we want AI in public services to strengthen—not weaken—trust in government, we must first understand how people experience these systems. Only with those insights can we then identify and address the practices that undermine trust. As one of the most frequent points of contact between governments and the public, the justice sector is a good place to start. 

Millions of people interact with the justice system every year, and AI is increasingly embedded in these interactions. Yet the use of AI in justice processes raises serious concerns about transparency and accountability—two pillars of trust in public institutions. This means that AI in the justice system is not only a sector-specific issue; it has far-reaching implications for those working to safeguard trust in democratic institutions. Misuse of AI that fails to deliver fair and inclusive justice outcomes will ultimately undermine people’s confidence in governments’ capacity to incorporate new technologies, especially AI, into the delivery of essential services.

Why Effective Justice Can Support Trust 

Legal problems occur more frequently than we might think. In the United States, for example, a 2021 study by the Legal Services Corporation found that two-thirds of Americans faced at least one legal problem in the previous four years, and 55 million Americans experience 260 million legal problems annually.

When people seek legal remedies, they are asking the government and tribunals to uphold the values and norms that govern interpersonal relations and hold societies together. This implicit contract is foundational to a strong democracy. As argued in the flagship report on justice of Pathfinders for Peaceful, Just and Inclusive Societies, justice “provides a framework for positive interaction between people, and between people and businesses and the state.” This makes the justice sector a natural place to build public trust.

The Organisation for Economic Co-operation and Development (OECD) has noted that “trust is perhaps the most crucial aspect in the justice system; [as] decisions can profoundly impact individuals’ lives and societal trust.” It further observes that declining trust often stems from doubts about the accountability and transparency of public institutions and their responsiveness to public participation. A people-centered justice approach can help to close this trust deficit, as effective, open, and accessible justice systems are designed to address precisely these concerns.

People-centered justice systems improve accountability and transparency by putting people’s needs at the center of justice interventions. In doing so, people-centered justice supports inclusive democratic societies grounded in respect for human rights, freedoms, and the rule of law, as well as in strong, independent, accountable, and transparent institutions. However, the rapid adoption of AI in the delivery of justice, without a people-centered approach, puts these benefits at particular risk. 

Fast-Tracking AI Use in Justice Systems

AI’s influence in legal systems is growing rapidly. A review of 200 use cases by the OECD shows that justice administration and access to justice is among the most popular domains where governments are deploying AI in public services. A 2023 UNESCO survey found that up to 44 percent of judicial operators have used AI for work-related activities. At the same time, private-sector investment in legal technology is skyrocketing—over USD 1 billion was invested between mid-September and mid-October of 2025 alone—highlighting the exponential growth of AI’s influence in the private legal system.

New AI tools promise to transform how people experience the law. Legal experts are excited about promising opportunities for AI to improve efficiency, support legal empowerment, and increase access to justice. Streamlining document review to clear case backlogs, improving access to legal information, and helping identify systemic bottlenecks are just a few examples of AI’s potential. However, concerns remain.

The justice sector is facing a rapid expansion of AI use without the regulatory oversight needed to guide it. Of the 44 percent of judicial operators who used AI in 2023, only 9 percent had received guidance on how to do so responsibly. The use of AI in justice systems remains inconsistent, and the risks of misuse, bias, discrimination, and inadequate oversight are widespread.

 

Regulations increasingly lag behind technological change, and many countries lack the infrastructure to build local large language models (LLMs)—systems that process and draw on vast quantities of legal and textual data to generate responses to human inquiries. Without local LLMs that take into account applicable laws and regulations and reflect community realities, bias, discrimination, and error-laden outputs will persist. Combined with the absence of clear principles and guardrails, these gaps threaten to undermine the benefits of AI to justice systems and risk creating new justice problems.

These challenges are further compounded by the absence of coherent, comprehensive global guidelines. Without them, we risk fragmentation in how different regions respond to the ad hoc use of AI in their legal systems. Current governance interventions leave room for improvement. In Latin America, for example, some countries are modeling their policies on the European Union’s regulatory framework, but localization remains limited. As Argentine digital-rights expert Franco Giandana said, “the language is too abstract, and there’s still little grasp of the national and regional challenges… not just to regulate AI but to build a coherent development strategy suited to our context.”

In their recent book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, economists Daron Acemoglu and Simon Johnson argue that “what you do with technology depends on the direction of progress you are trying to chart and what you regard as an acceptable cost.” The justice sector, however, has not adopted this outcome-oriented logic. Rather, we are looking for problems that AI can solve, not designing AI to solve our problems—and AI may not be the answer in every case. Responsible and beneficial use of AI in the justice sector will require form to follow function.

What should be the function of AI for justice? AI should be used to advance fair justice outcomes for all without undermining public trust or weakening democracy. But without deliberate intervention, the rapid, uneven uptake of AI risks weakening trust in justice systems and, by extension, in our governments.

Putting Public Trust at Risk

When the use of AI in justice systems is driven not by fair outcomes for people but by procedural outcomes for the systems themselves (i.e., efficiency gains), there are often unintended consequences for justice users. In 2025, a growing number of reports have emerged about the risks posed by AI in justice systems. Among them, evidence of bias, discrimination, and blatant errors—often without adequate accountability, transparency, or contextualization—poses critical risks to public trust.

Bias and Discrimination

Bias and discrimination are often-cited risks of using AI in justice systems. For example, Margaret Satterthwaite, the United Nations (UN) Special Rapporteur on the Independence of Judges and Lawyers, recently warned that judges are particularly concerned about the ways “AI could undermine public trust in judicial systems by introducing errors, hallucinations and biases, by exposing or monetizing private data, or by subverting the right to a trial by a human judge.” One example she highlights is the use of biased algorithms in case-allocation systems. AI models mandated to allocate cases could assign “cases against the government to pro-government judges, or cases against businesses to pro-business judges—risking elite capture.” Bias can also be embedded in AI systems long before they reach the courtroom. As the OECD notes, training AI models on historically biased data risks perpetuating that bias without any mechanism for redress.

When efficiency is the goal rather than fair outcomes, an overreliance on AI that overlooks risks of bias can lead to discrimination. Satterthwaite cautions that biased AI in criminal legal systems, for example, can lead to discriminatory decisions and unequal treatment. This is a concern that is especially acute with predictive AI. Predictive AI models use historical data to make probabilistic predictions about the future. As computer scientists Arvind Narayanan and Sayash Kapoor argue in AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, predictive AI is often attractive because it promises efficiency, but “efficiency is exactly what results in a lack of accountability.”

Errors without Accountability

A lack of accountability is also a key challenge, given that AI systems are prone to errors. As Narayanan and Kapoor point out, when evaluating the likelihood of recidivism, most people designated by predictive AI as high-risk do not actually commit another crime. In some cases, these inaccuracies are compounded when they disproportionately affect certain groups. In the United States, for example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used by some jurisdictions, was found to be “nearly twice as likely to misclassify Black defendants by predicting them to have a higher risk of recidivism than white defendants.”

This challenge is not limited to predictive AI, however, even if it is most prevalent there. Generative AI, which generates content in response to human prompts, also poses risks when used in legal settings. Satterthwaite emphasizes that “AI research tools by LexisNexis and Thomson Reuters hallucinate between 17 percent and 33 percent of the time.” Similarly, an audit of Legal Aid of North Carolina’s Legal Information Assistant (LIA) chatbot revealed that it sometimes provided inaccurate or contradictory legal information depending on the prompts it received.

Without accessible, reliable, and preventive accountability measures, error-laden legal tech risks further eroding trust between justice users and the legal system.

Transparency and Human Oversight

Transparency is another key concern when integrating AI into justice systems, as people struggle to trust technologies they cannot understand. The OECD’s review of AI in justice use cases highlights that some AI systems:

“…operate as black boxes, with their underlying methodologies, weighting of factors and potential biases shielded from public scrutiny. The proprietary nature of certain AI systems or lack of external audits prevents defendants from understanding or challenging the risk scores that influence decisions, raising concerns about due process and fairness in judicial decisions…When AI systems are not transparent, it becomes challenging to understand the rationale of decisions.” 

As Satterthwaite points out, when people cannot review the basis of decisions, human input is undermined, and the right to a fair trial is put at risk. Further, she warns that undue influence over the development and deployment of AI tools, whether from public agencies or private entities, undermines the independence of the judiciary and violates that right. Often, this influence is not readily apparent to justice users.

Looking beyond the courtroom, removing human discernment in enforcing the law also risks depriving justice systems of key tools for maintaining social stability. For example, in weak institutional environments, the discretion of justice actors (e.g., judges, police) to decide whether to enforce a particular law allows for considerations of fairness and social implications, especially when the law may disproportionately affect certain populations.

These are just a few examples from recent research, and they are not comprehensive. They do, however, directly demonstrate threats to accountability and transparency, the core drivers of public trust in institutions, and they underline the need for people-centered, purpose-driven, and well-guided AI use in the justice sector.

A Coalitional Platform for Justice and AI

The justice sector needs a more robust dialogue to advance a cohesive, principled, and people-centered justice AI agenda. Dialogue can inform an agenda to be championed in key AI governance fora, establishing standards for AI in justice delivery, ensuring tools exist to uphold those standards, and harnessing the promise of justice systems to support trust in public services. The groundwork for this already exists, and solutions are not hard to come by; they simply lack coordination, cohesion, packaging, and integration into broader AI governance efforts.

The justice sector can also draw from responsible AI practices emerging in other sectors. Tools such as bias assessments, transparency assessments, algorithmic assessments, and registries of algorithmic systems, promoted by leaders like María Paz Hermosilla Cornejo of Adolfo Ibáñez University’s public innovation lab GobLab UAI, offer mechanisms to identify and mitigate harms. Meanwhile, the human rights sector has a growing set of coherent, coordinated guidelines regarding AI. For example, the Office of the United Nations High Commissioner for Human Rights (OHCHR) hosts the United Nations Hub for Human Rights and Digital Technology, which collates resources for translating human rights to the digital space, including a focus on AI.

Similar conversations are unfolding in the health and education sectors, which are also confronting issues of responsible AI use. Cross-sector exchanges could lead to mutually beneficial outcomes. Yet without a dedicated space to exchange ideas and best practices, these tools are not collectively institutionalized within international justice principles for AI and remain largely disconnected from mainstream discourse on global governance platforms.

To turn these priorities into realities, the justice sector needs an outcome-oriented coalitional platform on justice and AI. This platform can support the exchange of best practices, identify and research pressing issues, establish collective frameworks, and set standards for the development and deployment of trustworthy AI in justice systems. This coalition can provide guidance on a people-centered approach to the use of AI in justice systems. Building on this foundation, the justice sector can better advocate for its priorities and integrate them into evolving AI governance frameworks at both national and global levels.

Charting a Path Forward: Justice, the Social Contract, and the Age of AI

Global trends in AI reflect an under-governed, sometimes untargeted approach that carries significant risks, especially for the justice sector. As Acemoglu and Johnson argue, “what we are witnessing today is not inexorable progress toward the common good but an influential shared vision among the most powerful technology leaders… focused on automation, surveillance, and mass-scale data collection, undermining shared prosperity and weakening democracies.” Blind faith in technology and over-focus on automation will not strengthen the relationship between people and the state. Instead, as they contend, it risks “amplif[ying] the wealth and power of this narrow elite, at the expense of most ordinary people.”

If governments want to strengthen relationships between people and the state and ensure that AI in public service delivery does not deepen distrust, they need to focus on how people build trust with their government in the first place. Justice systems should be a focus. As long as innovation outpaces the implementation of principles, governance, and regulatory frameworks, the justice system risks worsening the public trust deficit and undermining democracy.

A bottom-up, people-centered approach offers a critical path forward amid rapid technological change. This should be a priority for civil society, philanthropy, and government leaders concerned about public trust and strong democracies. We need to start prioritizing justice in AI governance. And to do so, we need to establish a coordinated AI justice agenda.
