Science - USA (2021-07-16)

SCIENCE sciencemag.org 16 JULY 2021 • VOL 373 ISSUE 6552 • p. 287

IMAGE: LESLEY SANDLES/ALAMY STOCK PHOTO

BOOKS et al.

SOCIAL SCIENCE

Automating incarceration

Better algorithms are key to reducing bias in criminal sentencing, argues a legal scholar

By Michael Spezio

In Just Algorithms, Christopher Slobogin develops two careful and sustained comparative arguments in favor of automating the nature and duration of criminal sentencing in the United States, along with principles to govern the machinery of this automation. The primary purpose of the book is to counter recent arguments against automated jurisprudence, especially those made concerning potential biases inherent in computational risk assessment instruments (RAIs) (1–5).

In the book's first chapter, "Rationale," Slobogin argues that attorneys, mental health professionals, judges, and parole board members frequently impose and enforce harmful legal decisions and that, while computational simulations are imperfect, they are increasingly valid, fair, and just predictors of an individual's future criminal tendencies. In support of this assertion, he summarizes his 2015 paper (6), in which he claimed that core elements of US culture, such as individualism, an adversarial justice system, the "laissez-faire, winner-or-loser ethos" of the country's political economy, the punitive character of "fundamentalist Christians," and historic racism are so entrenched that they will stymie all efforts designed to reduce incarceration and end racial bias in the US criminal justice system.
The book's central chapters—"Fit," "Validity," and "Fairness"—specify and support the legal principles that must govern the design, implementation, auditing, and revision of algorithms used in automated jurisprudence.

In "Fit," Slobogin describes the criteria an RAI must meet to conform to the law's specification of risk. These include the ability to provide a "more-likely-than-not standard" of the probability of risk that an individual will commit a specific set of actions within a given time period relative to his or her assigned group, in the absence of any legal constraints such as house arrest or incarceration. RAIs that could meet the specificity criterion would be those trained to predict only "conviction for the most serious violent offenses."
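The "more-likely-than-not" standard described above can be sketched as a simple decision rule. This toy example is illustrative only and is not taken from the book or from any real RAI; the names `RiskEstimate` and `meets_standard` are hypothetical, and the 0.5 threshold simply encodes "more likely than not."

```python
# Illustrative sketch (not from the book): a "more-likely-than-not"
# standard applied to an estimated probability that an individual
# commits one of a specific set of offenses within a stated window,
# absent legal constraints such as house arrest or incarceration.
from dataclasses import dataclass


@dataclass
class RiskEstimate:
    p_offense: float        # estimated probability over the window
    offense_set: tuple      # the specific offenses the RAI predicts
    window_years: int       # the time period the estimate covers


def meets_standard(est: RiskEstimate, threshold: float = 0.5) -> bool:
    """More-likely-than-not: the estimate must strictly exceed 0.5."""
    return est.p_offense > threshold


est = RiskEstimate(p_offense=0.62,
                   offense_set=("most serious violent offenses",),
                   window_years=2)
print(meets_standard(est))  # True: 0.62 > 0.5
```

Note that the legal standard is strict: an estimate of exactly 0.5 would not make the conduct "more likely than not" and so fails the test.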
In his chapter on fairness, Slobogin concedes that allegations that RAIs violate the Equal Protection Clause of the 14th Amendment have "more than a grain of truth." He argues, however, that RAIs that are fit and valid will not discriminate on the basis of race or other constitutionally protected categories, even if their use "has a disparate racial or sex-based impact."

To ensure that RAIs are applied fairly under the 14th Amendment, Slobogin argues that they must be used to estimate risk only by simulating future violent crimes, not future missed court dates, other types of misdemeanors, or nonviolent felonies. He also advocates the use of "race-conscious calibration...which involves creating different algorithms for different ethnicities or races."
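One common reading of group-specific calibration, sketched below, is to fit a separate score-to-probability mapping per group, so that the same raw score is converted using that group's own observed outcome rates. This is an illustrative toy, not the book's method; the function `fit_per_group` and the data are hypothetical.

```python
# Illustrative sketch (not from the book): per-group calibration,
# mapping each (group, score bin) to its empirical outcome rate,
# so identical scores can translate to different probabilities.
from collections import defaultdict


def fit_per_group(records):
    """records: iterable of (group, score_bin, outcome) tuples.
    Returns {group: {score_bin: empirical probability of outcome}}."""
    counts = defaultdict(lambda: [0, 0])  # (group, bin) -> [positives, total]
    for group, score_bin, outcome in records:
        counts[(group, score_bin)][0] += int(outcome)
        counts[(group, score_bin)][1] += 1
    calib = defaultdict(dict)
    for (group, score_bin), (pos, total) in counts.items():
        calib[group][score_bin] = pos / total
    return calib


# Toy data: the same raw score of 7 corresponds to different
# observed rates in groups A and B.
data = [("A", 7, True), ("A", 7, False), ("A", 7, True), ("A", 7, True),
        ("B", 7, True), ("B", 7, False), ("B", 7, False), ("B", 7, False)]
calib = fit_per_group(data)
print(calib["A"][7], calib["B"][7])  # 0.75 0.25
```

This is also the kind of design that critics cited in the review's opening flag under the Equal Protection Clause, since the mapping explicitly conditions on group membership.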
Slobogin's primary argument in favor of RAI-based sentencing appears in his penultimate chapter, "Structure." Here, he contrasts the advantages of preventive justice, wherein an individual is incarcerated on the assumption that they would otherwise represent a future violent threat to society, with the many disadvantages of "just desert" theory, in which incarceration is intended only to punish an individual for past crimes. The crux of his argument is that "both desert and risk are crucial considerations in fashioning sentences in individual cases, and arguably are the principal considerations in that context." RAIs, he maintains, will increase fairness in plea bargaining and charging decisions and "should trigger more oversight of a post-conviction process that has long been ignored."
Ultimately, however, the book does little to allay concerns about the inherent potential of algorithms to perpetuate racial discrimination in criminal sentencing, primarily because Slobogin tends to minimize the potential effects of systemic racism on the probability that one will be implicated in a violent crime. Biased data will always result in processes that are resistant to attempts to achieve algorithmic fairness.
Nonetheless, Just Algorithms is one of the first in-depth, systematic legal arguments in favor of automating justice that considers legal and scientific aspects of criminal punishment via the simulation of recidivism. As such, the book is necessary reading for anyone seriously interested in criminal justice reform and the ethical, legal, and social implications of applying data science technologies in judicial contexts.

REFERENCES AND NOTES


  1. T. L. Fass, K. Heilbrun, D. DeMatteo, R. Fretz, Crim. Justice Behav. 35, 1095 (2008).
  2. J. Kleinberg, H. Lakkaraju, J. Leskovec, J. Ludwig, S. Mullainathan, "Human Decisions and Machine Predictions" (NBER Working Paper W23180, Social Science Research Network, 2017).
  3. J. Dressel, H. Farid, Sci. Adv. 4, eaao5580 (2018).
  4. J. Skeem, C. Lowenkamp, Behav. Sci. Law 38, 259 (2020).
  5. S. Xue, M. Yurochkin, Y. Sun, Proc. Mach. Learn. Res. 108, 4552 (2020).
  6. C. Slobogin, Howard Law J. 58, 317 (2015).


10.1126/science.abj3268

Algorithms are imperfect, concedes Slobogin, but could be refined to reduce bias in the justice system.

Just Algorithms
Christopher Slobogin
Cambridge University Press, 2021. 182 pp.

The reviewer is a faculty member of the Psychology,
Neuroscience, and Data Science programs at Scripps
College, Claremont, CA 91711, USA.
Email: mspezio@scrippscollege.edu

