Karen Eltis is a law professor at the University of Ottawa.
In the disruptive digital age, the foremost challenge for policy-makers, industry and institutions is to mindfully harness the benefits of artificial intelligence (AI) in decision-making, while avoiding its unintended consequences.
In particular, new technologies tend to bypass not only institutions but also the legal system and its norms. This sidestepping of rules may inadvertently entrench creeping bias and eventually grant it the appearance of infallibility, encouraging sloppiness and even harming the vulnerable, whether in justice or in health care. Accordingly, we must strike a balance between allowing decision makers (judges, health-care providers and others) to harness the great advantages that machine learning offers for safety and efficiency, and ensuring that algorithms do not serve as a crutch, insidiously replacing nuanced, all-too-human decision-making.
Moreover, in seeking to formulate policy that advances constitutional values while preserving innovation, the broader ramifications of cyberspace’s extraterritoriality must also be heeded. In the absence of legal clarity, corporate actors are increasingly tasked with what is, in effect, adjudication. Judging is contextual and, to a great extent, cultural. It is, above all, human. The markedly “borderless” digital realm and its algorithms, however unintentionally, tend to decontextualize and oftentimes distort decision-making.
What, then, must we do? Helpful in this context is Daniel Kahneman’s “think fast/think slow” paradigm. Applied to AI, it can serve to distinguish “easy cases” (those that merely require applying an existing rule), where technology can relieve deciders such as courts, executives or health-care providers of unnecessary burdens, from “hard cases” that demand significant discretion in light of their complexity and social ramifications.
In the latter instances, AI cannot be allowed to usurp decision makers’ judgment, lest it not only decontextualize and mislead but also entrench bias. That outcome would offend the Canadian Charter of Rights and Freedoms and other legal instruments cultivated so painstakingly over the postwar period.
To better illustrate “fast” versus “slow” cases relevant to industry, the following examples are telling. As our consumer habits evolve, AI can accelerate choices such as selecting effective packaging for a given shipment, thereby reducing transportation costs and energy consumption. Going a step further, in medicine it may help save lives by rapidly screening for necrotizing fasciitis and accelerating its diagnosis, or by analyzing the signs of stroke, where time is of the essence and directly affects patient outcomes.
More nuanced choices, where context is key, require allowing the decider to exercise discretion, ensuring that judgment is neither unnecessarily constrained by “prepackaged” choices nor unknowingly mired in bias based on prohibited grounds. Such cases may include screening a child who presents with skin marks attributable to causes ranging from coagulation disorders to leukemia to abuse. They may also include deciding what content should be de-indexed from social media, or deemed hateful or violent and worthy of suppression for public safety.
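To make the “fast” versus “slow” division concrete in system-design terms, consider the following minimal sketch of a human-in-the-loop triage gate. It is purely illustrative: the `Case` structure, the `requires_discretion` and `touches_prohibited_ground` flags, and the `triage` routing are hypothetical constructs assumed for this example, not a description of any deployed system or of the author’s own proposal.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    AUTOMATE = auto()  # "fast" case: mechanical application of a clear rule
    HUMAN = auto()     # "slow" case: discretion reserved for an accountable person


@dataclass
class Case:
    description: str
    requires_discretion: bool        # hypothetical flag: complexity, social ramifications
    touches_prohibited_ground: bool  # hypothetical flag: e.g. race or religion at stake


def triage(case: Case) -> Route:
    """Route a case: automation assists only where a single lawful answer exists."""
    if case.requires_discretion or case.touches_prohibited_ground:
        return Route.HUMAN  # hard case: the algorithm must not usurp judgment
    return Route.AUTOMATE   # easy case: technology relieves an unnecessary burden


# Illustrative usage with the article's own examples
packaging = Case("select packaging size for a shipment", False, False)
bruising = Case("child presents with unexplained skin marks", True, False)

assert triage(packaging) is Route.AUTOMATE
assert triage(bruising) is Route.HUMAN
```

The design point is simply that the gate defaults to human review whenever discretion or a prohibited ground is in play; automation is the exception that must be earned, not the rule.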
Simply put, with an eye toward framing future uses of AI in line with the human-rights paradigm, and toward preserving democratic values and our own critical thinking, we must distinguish between instances that, in the words of Aharon Barak, former president of the Supreme Court of Israel, writing on judicial discretion, require the decider “to choose between two or more lawful alternatives,” and those where the “mechanical application of clear rules” is possible.
In situations where a single lawful solution is available, technology may indeed have a useful role to play. It may even serve to increase efficiency, accuracy and access, and to mitigate the cognitive biases that, in Prof. Kahneman’s model, accompany “thinking fast” or emotionally, while leaving the “slower” thinking to accountable (and human) decision makers.