Experts propose guidelines for the responsible use of AI in criminal justice, emphasizing transparency and fairness to ensure constitutional rights are protected.
In an age where artificial intelligence is permeating daily life, its encroachment into the criminal justice system prompts a challenging question: Can AI uphold fairness in critical, life-altering decisions?
AI is increasingly involved in tasks traditionally handled by judges and parole boards, such as predicting crime, analyzing DNA and recommending prison sentences. While AI systems have the potential to improve efficiency and objectivity, the stakes are high, raising pressing questions about fairness, transparency and accountability.
In April 2024, the National Institute of Justice (NIJ) issued a public request for information to guide future use of AI in justice.
The Computing Research Association convened a team of experts from academia and industry to respond, including Cris Moore, a professor at the Santa Fe Institute, and Stephanie Forrest, a professor of computer science at Arizona State University.
Their principal argument centers on transparency, especially when constitutional rights are at risk.
“The idea that an opaque system — which neither defendants, nor their attorneys, nor their judges understand — could play a role in major decisions about a person’s liberty is repugnant to our individualized justice system,” the authors pointed out. “An opaque system is an accuser the defendant cannot face; a witness they cannot cross-examine, presenting evidence they cannot contest.”
Despite concerns that greater transparency might reduce an AI system's usefulness, advances in explainable AI offer promising solutions.
Researchers have developed methods that reveal how a model reaches its outputs while preserving its predictive performance.
The recommendations submitted to the NIJ in May 2024 cover several key areas:
- Transparency of Data and Process: Every stakeholder, whether using AI or affected by its recommendations, should understand the data and the reasoning behind AI-driven decisions.
- Specificity in Output: AI recommendations should be quantitative, offering clear probabilities rather than qualitative labels, which can be easily misinterpreted.
- Never Replacing Human Judgment: AI should not supplant human decision-making, especially where constitutional rights and detention are concerned. Instead, AI might serve as a digital consultant to assist judges.
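To make the "Specificity in Output" recommendation concrete, here is a hypothetical sketch (not drawn from the authors' submission) contrasting a qualitative label with a quantitative probability for the same underlying risk estimate; the threshold and wording are illustrative assumptions:

```python
# Hypothetical illustration of the "Specificity in Output" recommendation:
# a probability can be weighed and contested; a bare label cannot.

def qualitative_label(p: float) -> str:
    """An opaque label: the cutoff is hidden from defendant and judge."""
    return "high risk" if p >= 0.3 else "low risk"  # 0.3 is an assumed threshold

def quantitative_output(p: float) -> str:
    """A transparent statement a judge can weigh against other evidence."""
    return (f"Estimated {p:.0%} probability of rearrest within two years, "
            f"based on outcomes among statistically similar past cases")

p = 0.35  # a hypothetical model estimate for one defendant
print(qualitative_label(p))    # the label hides how close p is to the cutoff
print(quantitative_output(p))  # the probability exposes the actual estimate
```

The point of the contrast is that a judge seeing "35%" can ask how the number was produced and how uncertain it is, whereas "high risk" invites over-interpretation of a hidden threshold.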
“If the judge understands what the system’s output means, including what kinds of mistakes they can make, then I think they can be useful tools,” Moore added. “Not as replacements for judges, but to provide an average or baseline recommendation.”
The team followed up with a detailed opinion in the August issue of the Communications of the ACM, commenting on Executive Order 13859, which calls for the safe and trustworthy testing of AI while protecting civil liberties and American values.
In this context, Moore underscores the critical balance AI must achieve: improving fairness and transparency without overwhelming the judicial process.
“We should use AI if it makes the judicial system more transparent and accountable. If it doesn’t, we shouldn’t use it,” Moore said.
Moore compares the transparency needed in AI to systems like the Fair Credit Reporting Act (FCRA). While the FCRA requires disclosure of data used in credit decisions, it doesn’t compel companies to reveal the exact processes, merely the data. An analogous approach in justice could balance transparency with practicality.
In conclusion, the responsible use of AI in the criminal justice system lies in its ability to enhance, not obscure, the fairness and transparency of proceedings.
As Moore eloquently put it, “[W]e should always be prepared to explain an AI’s recommendation and to question how it was produced.”
Source: Santa Fe Institute

