Politicians on both sides of the aisle have encouraged legislation guiding prison reforms. It’s well known that many prisons suffer from overcrowding. Likewise, state budgets often strain under the costs of incarceration. In addition, the judicial process is often slow, adding insult to injury. And there’s always the risk that a wrongful verdict might be handed down. Given these issues, some are advocating for the use of artificial intelligence in the courtroom. Some are even encouraging the use of a robot judge.
This may sound far-fetched, but in reality, the digitalization of the courtroom is already here. In some states, money bail is being replaced with AI algorithms that determine risk. In other countries, a robot judge has already decided more routine judicial cases. Without a doubt, there are a number of advantages to having artificial intelligence in the courtroom. But there are likely to be some notable risks as well. Within this brave new world, determining how these new technologies can best serve justice will likely be a major challenge.
“In a legal setting, AI will usher in a new, fairer form of digital justice whereby human emotion, bias and error will become a thing of the past. Hearings will be quicker and the innocent will be far less likely to be convicted of a crime they did not commit.” – Terence Mauri, Founder of London-based policy institute, Hack Future Lab
All Rise for the Robot Judge
Robotics is being used in a variety of settings today. Logistics and delivery robotics are nearly essential for staying competitive in many sectors. Surgical robotics is also advancing rapidly. But a robot judge is not something many people saw coming. Despite this, these AI systems are already handling judicial cases in some countries. Specifically, China has been employing artificial intelligence in the courtroom since 2017. A robot judge hears specific types of cases, such as trade disputes, e-commerce liability claims, and copyright infringements. To date, over 3 million cases have been handled by a robot judge in China.
In many instances, these cases never actually appear before a robot judge. Instead, the artificial intelligence in the courtroom consists of a process where opposing sides upload legal documents. The AI system then analyzes the information and determines a verdict based on law and facts. Naturally, this saves a tremendous amount of time and human interaction. Backlogs of cases have declined as a result, making this type of system efficient and economical. But these cases are much more straightforward than others. Those involving human sentencing and verdicts are of greater concern.
Recently, reports in the United Kingdom suggest that a similar system could be in place within the next 50 years. A robot judge would be able to determine guilt or innocence using a variety of techniques. Data on body language, hand gestures, eye movements, body temperature, and speech would be used in reaching a verdict. Reportedly, the artificial intelligence in the courtroom would be 99% accurate in its determinations. Here again, the time and costs saved by such a process make this system quite appealing. But when dealing with human rights and freedoms, not everyone is on board.
“Over the past few years, people have begun to understand what risk assessment tools are. And the more we explore them, the more we realize they’re a huge danger to the goals of the bail reform movement.” – John Raphling, Senior Researcher on Criminal Justice, Human Rights Watch
Artificial Intelligence and Bail Determinations
While a robot judge is not yet ruling on more serious crimes, judicial systems are starting to go down this path. We have already seen courts adopt digital systems during the COVID-19 pandemic. Currently, in California, citizens will vote this November on a state referendum concerning a change in the bail system. Instead of traditional monetary bail, a risk assessment tool is being proposed as a replacement. This algorithmic system uses artificial intelligence in the courtroom to determine a person’s risk. Based on demographics, background, and criminal history, it decides whether release is granted while awaiting trial. If the results are favorable, the accused is released. If not, they remain incarcerated.
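To make the decision flow concrete, here is a minimal, purely illustrative sketch of a threshold-based pretrial risk assessment. The factors, weights, threshold, and all names (`Defendant`, `risk_score`, `release_decision`) are invented for illustration; real tools use many more inputs and proprietary weightings.

```python
from dataclasses import dataclass

# Hypothetical defendant profile. Real assessment tools draw on far
# more data than these three toy factors.
@dataclass
class Defendant:
    age: int
    prior_convictions: int
    prior_failures_to_appear: int

def risk_score(d: Defendant) -> int:
    """Toy additive risk score: a higher score means higher assessed risk."""
    score = 0
    score += 2 * d.prior_convictions
    score += 3 * d.prior_failures_to_appear
    if d.age < 25:  # youth is often treated as a risk factor in such tools
        score += 1
    return score

def release_decision(d: Defendant, threshold: int = 5) -> str:
    """Grant pretrial release only if the score falls below the threshold."""
    return "release" if risk_score(d) < threshold else "detain"

# Score 2 -> released; score 8 -> detained.
print(release_decision(Defendant(age=30, prior_convictions=1, prior_failures_to_appear=0)))  # release
print(release_decision(Defendant(age=22, prior_convictions=2, prior_failures_to_appear=1)))  # detain
```

The key design point is that the human decision is reduced to a cutoff on a single number, which is precisely what makes the system fast and cheap, and also what concentrates all the consequences into how that number is computed.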
The reason many favor this use of artificial intelligence in the courtroom involves discrimination. Current monetary bail systems can discriminate on the basis of poverty. This places some individuals at risk of losing their jobs, homes, and custody of their children if they cannot afford bail. Of course, there are always bail bondsmen, but their services carry costs down the road. In reality, some people confess to crimes they didn’t commit simply to avoid the potential repercussions. Because of this, many advocate for the algorithmic system.
Unfortunately, artificial intelligence in the courtroom for risk assessment has potential flaws as well. Because these systems use demographics and past data to make their determinations, inherent biases exist. African-Americans are incarcerated at higher rates than Whites, which could make an algorithm more likely to discriminate based on race. If so, then this system will be no better than the existing monetary bail system. In fact, it could be worse. These concerns expose the types of problems the judicial system faces when employing artificial intelligence in the courtroom.
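The bias mechanism can be shown with a toy example. Suppose two groups behave identically, but one group is policed more heavily and so accumulates more *recorded* prior arrests. A score built on the records, rather than on actual behavior, then rates that group as riskier. All numbers below are invented for illustration.

```python
# Two hypothetical groups with identical true reoffense behavior.
# Group "B" is policed more heavily, so its members accumulate more
# recorded prior arrests per person (invented averages).
avg_recorded_priors = {"A": 1.0, "B": 2.5}

def naive_score(group: str) -> float:
    """A score trained on records, not behavior, simply scales recorded priors."""
    return 2.0 * avg_recorded_priors[group]

print(naive_score("A"))  # 2.0
print(naive_score("B"))  # 5.0: a higher risk score despite identical behavior
```

The disparity in the output comes entirely from the disparity in the historical records, which is exactly how an algorithm can reproduce, or amplify, the bias already present in the data it learns from.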
“The idea that people are inherently risky needs to change. The problem with risk assessment tools is that everyone is ranked as having some kind of risk.” – Meghan Guevara, Executive Partner, Pretrial Justice Institute
A Major Work in Progress
If the reports from the United Kingdom are correct, it’s a good thing that we have 50 years to sort things out. Reports may suggest high levels of accuracy in determining truth and guilt. But in reality, these anticipated results may fall short. Likewise, judicial leniency in appropriate cases is an inherent part of the system that a robot judge may not grasp. And existing uses of artificial intelligence in the courtroom for release risk assessments are not ideal. In New Jersey, these algorithms have reduced incarceration levels for those awaiting trial by 27 percent. But they have not changed racial disparities at all. This supports the notion that these systems can have inherent biases because of the way they determine results. Regardless, efficiency and cost savings will likely push these technologies forward in the courtroom. Hopefully, over the next several decades, we can sort out just how the digitalization of the courtroom will take place.