
David Butt is a Toronto-based criminal lawyer.

Eric Loomis was the wheelman in a 2013 drive-by shooting in La Crosse, Wis. When caught, he pleaded guilty to lesser offences. That was the easy part. The hard part was deciding a fit sentence for him.

At sentencing, criminal courts routinely consider the risk the offender poses to society at large: Typically, a bigger risk means a longer sentence. The Wisconsin court did consider Eric Loomis's risk of reoffending, but with a profound twist. The court relied on artificial intelligence to assess that risk. And when AI determined the risk was high, Mr. Loomis got six years.

Should a computer algorithm play a role in taking away a person's liberty? Maybe, but certainly not the way it happened for poor Eric Loomis.

Risk assessments are valuable in court. It is very helpful to know how likely a person is to reoffend in the future so the judge can craft a sentence to reduce that likelihood. But how are these glimpses into the future accomplished?

Risk assessments are actuarial tools. They take data about a particular offender and run it through an algorithm, which then produces a level of risk. The validity and reliability of a risk assessment therefore depend entirely on the types of data analyzed and on how that data is weighted and analyzed; in other words, on how the algorithm is designed. Which leads us to the serious problem in Eric Loomis's case.

The court that sentenced Mr. Loomis used a commercial risk-assessment tool. The algorithm at its heart was therefore proprietary, meaning it was secret. As the Wisconsin Supreme Court observed, "The proprietary nature of [the tool] has been invoked to prevent disclosure of information relating to how factors are weighed or how risk scores are to be determined."

Eric Loomis's loss of liberty was caused, in part, by artificial intelligence whose core inner workings he could not access, analyze, or understand and therefore could not challenge.

Traditional risk assessments in court are undertaken not by software, but by forensic psychiatrists. These professionals assess offenders using multiple methods, including actuarial risk tools. The crucial difference from Eric Loomis's case is that the forensic psychiatrists appear in court to testify. They are accountable for the tools they use, the way those tools are constructed, how they are deployed and the demonstrated validity (or otherwise) of their results.

In other words, traditional risk-assessment methods offer the person whose liberty is at stake every opportunity to challenge the actuarial conclusions that could result in more time in jail. When the stakes are so high, fairness and due process demand nothing less.

But forensic psychiatrists are expensive, often prohibitively so. The savings to the justice system achievable by dispensing with forensic psychiatrists and simply running a bunch of offender data through an algorithm are no doubt enormous. Furthermore, it makes perfect sense from a business and knowledge-advancement perspective to protect the secrecy of proprietary software. Without secrecy protection, product developers cannot realize any return on research and development investments, so R&D will not happen and we will all be the poorer for it.

Introducing AI into the justice sector appears to present a collision of two legitimate value sets: the commercial imperatives of secrecy and return on investment, and the justice imperatives of openness and due process. Are we stuck with a choice between the two?

No. We can pursue the revolutionary potential of AI to enhance justice processes without undermining essential justice safeguards. Justice is a core public service and, in economic terms, a public good. It is not just another playground for profit-maximizing economic activity. Therefore, it is incumbent on the public, not private, sector to fund and develop AI tools to enhance justice processes. Since the public sector has no business need to keep commercial secrets, the inner workings of those AI tools can be disclosed, and thus fully and independently challenged in court. The result will be more comprehensible and thus more acceptable AI-aided verdicts.

The Wisconsin Supreme Court misused artificial intelligence, permitting commercial imperatives to drive a sentencing outcome at the expense of fairness. Profit trumped justice: a serious mistake. A society dazzled by AI that sacrifices openness and due process in court on the altar of commercial secrecy has lost its moral compass. Just ask Eric Loomis, ticking off endless days on his prison calendar without the slightest idea why.
