No Need for Voir Dire
Artificial intelligence (AI) offers tremendous potential and, at the same time, raises apprehension about how it might be used. I foresee one use of AI that could conceivably end a centuries-old system: the use of 12 jurors to decide the questions of fact in a criminal or civil matter. To be clear, I’m only going to touch on how guilt or innocence is decided based on the evidence presented; I exclude sentencing and other aspects of the justice system.
The use of 12 ordinary citizens with knowledge of a case to determine the facts and deliver a verdict dates back to medieval England. The modern jury, whereby citizens assess evidence presented in court, dates to the 13th century. That’s a system long entrenched in our society. I don’t expect we’ll let it go quietly in favour of a new one. But AI could be a disruptor.
A radio was always on in our house in the 1970s and 1980s. I listened to the news. Some stories of that era I followed closely. In particular, I heard about three individuals in the public eye, none wanting to be there. The three men were David Milgaard, Donald Marshall and Guy Paul Morin. Each was convicted of a crime. Years later, each had his conviction overturned when advocates successfully argued to have evidence re-examined or new evidence assessed. Ontario pediatric forensic pathologist Dr Charles Smith was often in the news in my early adult life. The flawed evidence presented by Dr Smith at numerous trials led to many wrongful convictions.
For me, AI offers hope that flawed evidence in a trial gets recognized as defective at the time of the trial. For me, AI offers hope that evidence gets judged impartially, free from racism. For me, AI offers hope that the right questions and answers in a case determine the verdict.
I often hear, “I don’t trust AI: it makes mistakes.” I can’t argue against this assertion. But I do argue that humans make mistakes too. Trusting entirely in a human’s judgement is flawed, in my opinion, especially when that judgement applies to another individual. And so one aspect of AI that fascinates me in this application of jury replacement is the large language model (LLM) that would underpin the AI engine. LLMs rely on a corpus, a collection of data. The quality of the AI engine depends on the quality of the data. One problem with LLMs today is the use of data that is inaccurate or biased. Could we build an AI engine to hear evidence in a trial and be confident that the engine would make its decision free from bias and incorrect data? The short answer is probably no: the corpus is likely flawed. But perhaps we can leverage future generations of AI to weed out biases and correct inaccuracies. We would then be more confident in our LLM and consequently in the verdict from an AI jury.
Finally, a verdict is binary: guilty or not guilty. But what if our AI verdict could also offer a confidence level in the verdict, something our 12-juror system doesn’t? Wouldn’t a 51% probability of guilt or innocence give us more to think about than a 95% probability? How we use that additional information might be quite a challenge.
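To make the idea concrete, here is a minimal sketch of what a verdict-with-confidence might look like. Everything in it is my own illustrative assumption: the `Verdict` structure, the field names, and the borderline band around 50% are hypothetical, not part of any real or proposed system.

```python
# Hypothetical sketch: a verdict that carries a confidence level,
# not just a binary outcome. Names and thresholds are illustrative
# assumptions only.

from dataclasses import dataclass


@dataclass
class Verdict:
    guilty: bool        # the binary verdict we have today
    probability: float  # estimated probability of guilt, 0.0 to 1.0

    def needs_scrutiny(self) -> bool:
        # A probability near 0.5 tells us more than the bare verdict:
        # the evidence barely tipped one way or the other.
        return 0.4 <= self.probability <= 0.6


def render(v: Verdict) -> str:
    label = "guilty" if v.guilty else "not guilty"
    flag = " (borderline: review recommended)" if v.needs_scrutiny() else ""
    return f"{label} at {v.probability:.0%} confidence{flag}"


print(render(Verdict(guilty=True, probability=0.51)))
# → guilty at 51% confidence (borderline: review recommended)
print(render(Verdict(guilty=True, probability=0.95)))
# → guilty at 95% confidence
```

The borderline flag is one possible answer to “how we use that additional information”: a near-coin-flip verdict could automatically trigger further review, while a high-confidence one would not.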
In closing … Lowell Green was born in 1936 and died this past weekend, February 14, 2026. Mr. Green was a long-time Ottawa broadcaster. I listened to him throughout the 1970s on CFRA. Ken “The General” Grant (1935–2023) got me up and ready for school each morning during this same period. Later in life, I never heard broadcasters quite like the ones I listened to when I was twelve. Does anyone?