America's approach to criminal sentencing is among the fairest and most equitable in the world ... assuming you're wealthy, white and male. Everybody else is generally SOL. Over the past three decades, America's prison population has quadrupled to more than 2.3 million people. Of those incarcerated, 58 percent are black or Latino, even though those groups make up barely a quarter of the general US population. The racial disparity in America's justice system is both obvious and endemic, which is why some courts have started looking for technological solutions. But can an artificial intelligence really make better sentencing recommendations than the people who designed it? We're about to find out.
Human judgment can be dangerously fallible. As the ACLU found in 2014, black and Latino men are not only more likely to go to prison than their white counterparts, but their sentences are also nearly 20 percent longer. And the more severe the crime, the bigger the disparity. In 2009, black Americans made up 13 percent of the country's population yet constituted 28 percent of all inmates serving life sentences and more than 56 percent of both those serving life without parole and those sentenced to life without parole for crimes committed when they were kids. What's more, studies have shown that everything from when an official last ate to how well the local sports team is performing can generate wild swings in sentencing decisions. And that's where the cold, calculating and empirically based logic of AI is supposed to come in.
A February study out of Cornell University suggests that, at least in decisions about whether to grant bail, AI may provide a significantly fairer alternative to human judges. In its machine-learning policy simulation, the Cornell team calculated that such systems could cut crime rates by 24.8 percent without increasing jailing rates, simply by denying bail to the most dangerous offenders. Even more impressive, they might be able to reduce the US prison population by a whopping 42 percent without affecting the crime rate, by releasing arrestees with a minimal likelihood of committing more crimes. Crucially, these benefits would extend to black and Latino defendants just as they would to white ones.
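The Cornell paper describes a policy simulation, not published courtroom software, but the underlying idea is simple: rank arrestees by a model's predicted risk and jail only the highest-risk ones, holding the jailing rate fixed. A hypothetical sketch of that release rule might look like this (the function name, scores and capacity are all invented for illustration):

```python
# Hypothetical sketch of a risk-ranked release policy. This is NOT the
# Cornell team's actual model; it only illustrates the ranking idea.

def release_decisions(risk_scores, jail_capacity):
    """Jail the `jail_capacity` highest-risk arrestees; release the rest.

    risk_scores: dict mapping arrestee id -> predicted risk (higher = riskier)
    Returns the set of ids recommended for release.
    """
    # Sort ids from highest predicted risk to lowest.
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    jailed = set(ranked[:jail_capacity])
    return set(risk_scores) - jailed

# Illustrative numbers only.
scores = {"A": 0.92, "B": 0.15, "C": 0.56, "D": 0.08, "E": 0.71}
released = release_decisions(scores, jail_capacity=2)
# "A" and "E" carry the highest predicted risk, so the other three
# arrestees are recommended for release.
```

The fairness claim in the study hinges on the scores themselves, not this ranking step: if the predicted risks are unbiased, the release rule treats every group identically.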
But AI is not the legal silver bullet that some had hoped for, at least not yet. Take the case of Eric Loomis v. Wisconsin. Loomis had been convicted of fleeing the police in a vehicle and sentenced to six years in prison. The duration of his sentence was influenced by his "high-risk" status, which was determined by Compas, a risk-assessment program employed by the court. The problem is, nobody knows how Compas works, save for Northpointe Inc., the company that sells it. The software is proprietary, its algorithm opaque, and the way in which it weighs various factors in its decision-making has been ruled a trade secret.
There is simply no legal means of compelling Northpointe to divulge how its software works. Loomis tried, arguing all the way up to the Wisconsin Supreme Court that his legal team should be able to examine the software and challenge the validity of its recommendations (SCOTUS declined to review the case in June).
The court eventually ruled against Loomis, reasoning that the software returned the same result a human judge would have, given Loomis' actions and criminal history. However, in that decision, Justice Ann Walsh Bradley noted a ProPublica study from 2016 that found black defendants in Broward County, Florida, "were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism" by the software.
"This study and others raise concerns regarding how a Compas assessment's risk factors correlate with race," Bradley wrote. Therefore, it should be employed only to provide "the sentencing court with as much information as possible in order to arrive at an individualized sentence" rather than be the deciding factor itself.
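The disparity ProPublica measured is a gap in false-positive rates: defendants labeled high risk who did not go on to reoffend. Because Compas's internals are secret, the sketch below is purely illustrative, with made-up data, but it shows the kind of per-group calculation involved:

```python
# Hypothetical illustration (not Compas) of a false-positive-rate gap:
# the share of non-reoffenders in each group who were mislabeled high risk.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for high, re in records if high and not re)
    non_reoffenders = sum(1 for _, re in records if not re)
    return false_positives / non_reoffenders if non_reoffenders else 0.0

# Two invented groups with identical reoffense outcomes but different labels.
group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(False, False), (True, True), (False, False), (False, False), (True, False)]

fpr_a = false_positive_rate(group_a)  # 2 of 4 non-reoffenders mislabeled -> 0.5
fpr_b = false_positive_rate(group_b)  # 1 of 4 non-reoffenders mislabeled -> 0.25
```

A gap like this can persist even when the tool's overall accuracy is the same for both groups, which is why the metric you audit matters as much as the audit itself.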
So here we are, with human judges who can't seem to stop tripping over their own prejudices (however subconscious) and closed-source sentencing software that can't be trusted by the public. Luckily, Montgomery County, Ohio, appears to be taking Justice Bradley's advice and developing a hybrid solution to better serve its community.
Montgomery County Juvenile Court Judge Anthony Capizzi has teamed with IBM to adapt the company's Watson AI system to work within the judicial system. It's part of a pilot program aimed at helping judges better understand the nuances of a kid's home life and, in turn, make better decisions in regard to his or her care.
Judge Capizzi sees around 30 cases in an average docket, with only about six minutes to spend on any individual kid. There's certainly no time to be fumbling with paperwork. That's why Capizzi's court is leveraging Watson's cognitive abilities to develop a digital case-file system that surfaces all of the most relevant information to the case at hand.
"This is really a care-management system," Capizzi told Engadget, "the distinction being that a case-management system tells the court what's happened in the past, what's going on with that child, that family, that court over the last three, five, 20 years. It doesn't give you any indication of their capability for the future."
With this system, however, the judge is afforded a more complete view of the child's life, her essential information displayed on a dashboard that can be updated in real time. Should the judge need additional details, he can easily have them pulled up. "If I have 10 care providers in my region, can Watson tell me -- because of where that child lives, their educational background, their limitations, their family -- is there a better one for that child versus the nine others?"
But it's not as though Capizzi blindly follows Watson's recommendations. He points out that when making decisions in child-custody matters, he's already receiving a number of competing recommendations -- from law enforcement, probation officers and mental-health providers. "In the end, the judge makes the decision -- I make the tough call," he said.
"It gives me a better ability to synthesize what I know," he explained. "It allows me to learn information quicker and in a concise way. It gives me the ability to read hundreds of law-review articles, maybe thousands of law-review articles in a matter of a day or two. ... Watson can do that better at this point than any one or two or three individuals."
Capizzi expects Watson's computational ability to be fully realized within the next 18 months. As more and more information is fed into the system, Watson should begin returning increasingly accurate recommendations, which should help foster trust in the system. The eventual goal is to apply the digital case-file system across all 88 of Ohio's counties and potentially serve as the model for a national program. And not just for juvenile or family courts. Capizzi envisions a day when every criminal court has access to this sort of technology. "The courts are only successful, I think, if they have the broadest, most unbiased information available to make decisions," he said.
Granted, there is still a danger of Watson becoming biased based on the information being fed to it -- just as is the case with any machine-learning system. However, Capizzi is steadfast in his belief that by combining the relative strengths of humans and AI, not only will the courts operate more efficiently, they'll be able to markedly improve the service they provide to their constituents.
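That "garbage in, garbage out" risk is easy to demonstrate. In the toy example below (which has nothing to do with Watson's actual architecture), a naive frequency-based risk model is trained on historical records in which one group was flagged twice as often, and it dutifully learns the disparity:

```python
# Toy illustration of how a learning system inherits bias from its
# training data. All names and numbers are invented.
from collections import Counter

def train_risk_model(past_cases):
    """past_cases: list of (group, flagged) pairs.

    Returns a dict mapping each group to its learned 'risk':
    the historical rate at which that group was flagged.
    """
    flagged = Counter(group for group, was_flagged in past_cases if was_flagged)
    totals = Counter(group for group, _ in past_cases)
    return {group: flagged[group] / totals[group] for group in totals}

# Group "X" was historically flagged twice as often as group "Y",
# even though each group has 100 otherwise-identical cases.
history = ([("X", True)] * 40 + [("X", False)] * 60 +
           [("Y", True)] * 20 + [("Y", False)] * 80)
model = train_risk_model(history)
# The model now scores "X" at double the risk of "Y" -- a disparity it
# learned from past decisions, not from anything the arrestees did.
```

The model isn't malfunctioning here; it is faithfully reproducing the pattern in its inputs, which is exactly why Capizzi's insistence on a human making the final call matters.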
"Courts have to change," Capizzi concluded. "Technology is changing every aspect of life, and I really think this gives our courts a more efficient way to get work done."