form legal discovery work faster and
more accurately than humans.
“In a high crime city, a judge might
start to hand out harsher sentences towards the upper end of the sentencing
guidelines. In court, if a judge does not
like one of the lawyers, that can affect
the judge’s opinion,” says Greenwood.
The argument is that machines
could potentially analyze facts and influence judgments dispassionately,
without human bias, irrationality, or
mistakes creeping into the process.
For instance, the Japanese bar exam
AI developed by Goebel and his team is
now considered “a world leader in the
field,” according to CBC. It even succeeded where at least one human did not: one of Goebel’s colleagues failed the Japanese bar exam.
Human fallibility is not an isolated
problem in the legal field. According
to an investigation by U.K.-based newspaper The Guardian, local, state, and
federal courts in the U.S. are rife with
judges who “routinely hide their connections to litigants and their lawyers.”
The investigation learned that oversight bodies found wrongdoing and issued disciplinary action in nearly half
(47%) of complaints about judge conflict of interest they investigated.
However, oversight bodies rarely look into complaints at all: 90% of more than 37,000 complaints were dismissed by state court authorities “without conducting any substantive inquiry,” according to the investigation.
Conflict of interest is not the only
human bias that plagues the U.S. legal
system; racial bias, explicit or implicit,
also is common.
“Minorities have less access to the
courts to begin with, and tend to have
worse outcomes due to systemic factors limiting their quality of representation, and subconscious or conscious
bias,” says Oliver Pulleyblank, founder
of Vancouver, British Columbia-based
legal firm Pulleyblank Law.
Intelligent machines, however, do
not carry the same baggage. Acting as
dispassionate arbiters looking at “just
the facts,” machines hold the potential
to influence the legal decision-making
process in a more consistent, standardized way than humans do.
The benefits would be significant.
“To introduce a system with much
greater certainty and predictability
would open up the law to many more
people,” says Pulleyblank. The high
cost and uncertain outcomes of cases
discourage many from pursuing valid
legal action.
“Very few people can afford to litigate matters,” says Pulleyblank, “even
those who can generally shouldn’t, because legal victories are so often hollow
after all the expenses have been paid.”
However, when you look more deeply at machine-assisted legal decisions,
you find they may not be as impartial or
consistent as they seem.
“Unbiased” Machines
Created by Biased Humans
In the Loomis algorithm-assisted
case, the defendant claimed the algorithm’s report violated his right to
due process, but there was no way to
examine how the report was generated; the company that produces the
Compas software containing the algorithm, Northpointe, keeps its workings under wraps.
“The key to our product is the algorithms, and they’re proprietary. We’ve
created them, and we don’t release
them because it’s certainly a core piece
of our business,” Northpointe executives said, as reported by The New York Times.
This is the so-called “black box”
problem that haunts the field of artificial intelligence.
Algorithms are applied to massive datasets. The algorithms produce results based upon their “secret
sauce”—how they use the data. Giving
up the secret sauce of an algorithm is
akin to giving up your entire competitive advantage.
The result? Most systems that use
AI are completely opaque to anyone
except their creators. We are unable
to determine why an algorithm produced a specific output, recommendation, or assessment.
This is a major problem when it
comes to using machines as judge and
jury: because we lack even the most
basic understanding of how the algorithms work, we cannot know if they
are producing poor results until after
the damage is done.
ProPublica, an “independent, nonprofit newsroom that produces investigative journalism with moral force,”
according to its website, studied the
“risk scores,” assessments created
by Northpointe’s algorithm, of 7,000
people who were arrested in Broward
County, FL. These scores are used in courtrooms to set bail and determine release dates, as they purportedly predict a defendant’s likelihood of committing another crime.
As it turns out, these algorithms
may be biased.
In the cases investigated, ProPublica says the algorithms wrongly
labeled black defendants as future
criminals at a rate nearly twice that
of white defendants (who were mislabeled as “low risk” more often than
black defendants).
Because the algorithms do not operate transparently, it is difficult to tell if
this was an assessment error, or if the
algorithms were coded with rules that
reflect the biases of the people who created them.
In addition to bias, the algorithms’
predictions just are not that accurate.
“Only 20% of the people predicted
to commit violent crimes actually went
on to do so,” says ProPublica. Fewer violent crimes committed is a good thing, but based on this assessment, decisions were made that treated the other 80% of those flagged as likely violent criminals when they were not.
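ProPublica’s two findings correspond to two standard measures of a binary classifier: the false positive rate within each group (how often people who did not reoffend were labeled high risk) and the positive predictive value (what share of “high risk” labels were borne out). The short Python sketch below uses invented placeholder counts, not ProPublica’s actual data, simply to show how those figures are computed and how a score can misfire very differently across groups.

```python
# Illustrative sketch of the two metrics discussed above.
# All counts below are hypothetical placeholders, NOT ProPublica's data.

def false_positive_rate(false_pos, true_neg):
    """Share of people who did NOT reoffend but were labeled high risk."""
    return false_pos / (false_pos + true_neg)

def positive_predictive_value(true_pos, false_pos):
    """Share of 'high risk' labels that were actually followed by a new crime."""
    return true_pos / (true_pos + false_pos)

# Hypothetical confusion-matrix counts for two groups of defendants.
groups = {
    "group_a": {"tp": 100, "fp": 400, "tn": 600, "fn": 50},
    "group_b": {"tp": 100, "fp": 200, "tn": 800, "fn": 50},
}

for name, c in groups.items():
    fpr = false_positive_rate(c["fp"], c["tn"])
    ppv = positive_predictive_value(c["tp"], c["fp"])
    print(f"{name}: false positive rate = {fpr:.0%}, "
          f"predictive value of a 'high risk' label = {ppv:.0%}")

# With these made-up numbers, group_a's false positive rate (40%) is twice
# group_b's (20%), and only 20%-33% of "high risk" labels pan out: the same
# shape of disparity and low precision described in the article.
```

The point of the sketch is only that both numbers matter: a tool can have similar overall accuracy for two groups while mislabeling non-reoffenders in one group far more often, and a low predictive value means most people flagged as future violent criminals never become one.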
Critics claim that algorithms
need to be far more transparent before they can be relied on to influence legal decisions.
Even then, another huge problem
with having AI take on a larger role
in the legal system is that there is
no guarantee machines can handle
the nuances of the law effectively,
says Pulleyblank.
“Many legal problems require judges to balance distinct interests against