Interpretable AI models for judicial decisionmaking: beyond explicability towards legal due process

Authors

  • Rodrigo L. Canalli Artificial Intelligence Governance and Regulation Laboratory (LIA) of the Brazilian Institute of Education, Development and Research (IDP) https://orcid.org/0000-0002-4121-1395

Keywords:

Due Process; Explicability; Interpretability; Legal Reasoning; Opacity

Abstract

As AI algorithms are employed to apply legal rules in determining rights and obligations, questions arise concerning the observance of due process of law. The development of opaque machine learning models, whose predictions cannot be satisfactorily explained, has spurred debate around the explainability of AI models for decision-making. The article argues that (i) for AI models used in judicial decision-making, the standard of explainability not only proves insufficient to meet the requirements of publicity and reasoned justification of judicial decisions, but also imposes a form of "nakedness" not required of human judges; and (ii) a more appropriate standard is that of interpretable models for judicial decision-making, characterized as models able to deliver decisions that are grounded in current law (legality), internally and externally coherent (consistency), and compatible with the decision a human judge would reach in a similar case.

Published

09-05-2024