New Zealand Listener – June 08, 2019

TECHNOLOGY

We’ve got algorithm

Our use of AI for problem-solving is okay so far, but the Government needs to do better.

by Peter Griffin


Here’s the good news about our government departments’ use of artificial intelligence (AI): they haven’t massively screwed up, so far. But they could if they continue with what researchers describe as a “piecemeal” approach to the use of computer algorithms to automate systems and, more importantly, make predictions that could affect our lives.
There’s a lot of hype about predictive algorithms, but they are nothing new to government. The RoC*RoI predictive model, developed by Department of Justice senior psychologists more than 25 years ago, is used to predict criminals’ risk of reconviction and risk of imprisonment.
It is fairly uncontroversial. A computer algorithm, fed basic social and demographic details and an individual’s criminal history, consistently outperforms the judgments of trained experts when it comes to picking whether a person is likely to reoffend or end up back in prison.
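RoC*RoI’s exact formula isn’t reproduced here, but the recipe described is standard statistics: a classifier trained on a handful of background features. Here is a minimal, purely illustrative sketch in Python, with feature names, data and weights all invented rather than taken from RoC*RoI:

    # Illustrative sketch only: a generic risk classifier of the kind
    # described, trained on invented data. Not the actual RoC*RoI model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical inputs: age, prior convictions, months since last release.
    age = rng.uniform(16, 60, size=n)
    priors = rng.poisson(2, size=n)
    months_free = rng.uniform(0, 120, size=n)
    X = np.column_stack([age, priors, months_free])

    # Invented ground truth: younger, more priors, less time free = riskier.
    logit = -0.05 * (age - 30) + 0.6 * priors - 0.02 * months_free - 1.0
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score one hypothetical individual.
    person = np.array([[25.0, 3.0, 6.0]])
    print(f"Estimated reconviction risk: {model.predict_proba(person)[0, 1]:.0%}")

Part of why such models outperform expert judgment is consistency: the same weights are applied to every case, free of day-to-day human variability.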
But as the mountain of data we generate grows, the Government is naturally tempted to break down the departmental silos in which information has traditionally been kept and put it all together to give a more complete picture of each person.
We shouldn’t necessarily fear that. The data-driven approach, Bill English’s “social investment”, could lead to better provision of taxpayer services, better-functioning health and social welfare systems and the fairer sharing of wealth.
But as a group of University of Otago researchers note in a newly released Law Foundation-funded report, we need a stronger focus on “accuracy, transparency, control and bias” when designing and deploying algorithms across government.
A stocktake by the Department of Internal Affairs and Statistics NZ last year revealed that 32 algorithms were being used in 14 government agencies. They include Neet (Not in Education, Employment or Training), the Ministry of Social Development’s predictive modelling tool to identify school leavers who may be at greater risk of long-term unemployment, and ACC’s system for automating compensation claims, which leaves staff with only a few to process manually.
So far, so good. But there’s no independent audit of how accurate these algorithms are and whether they are less biased than the humans who would otherwise be making decisions about us. There’s no system to question the outcome of an algorithm-generated decision to see what factors were considered.
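For a simple in-house model, that kind of questioning is technically easy. A hypothetical sketch, assuming a linear scoring model with invented weights, that lists the factors behind one decision:

    # Hypothetical sketch: in a linear model, each factor's contribution
    # to a decision is just weight * value, so an outcome can be unpacked
    # and questioned. All names and numbers are invented.
    weights = {"age": -0.05, "prior_convictions": 0.6, "months_since_release": -0.02}
    person = {"age": 25, "prior_convictions": 3, "months_since_release": 6}

    contributions = {name: weights[name] * person[name] for name in weights}
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name}: {value:+.2f}")

Ranked like that, the factors driving a score can be inspected and challenged; the missing piece is a system that makes such answers available on demand.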
We need to get our house in order. But we have an advantage here. “Unlike some countries that use commercial AI products, we’ve tended to build our government AI tools in-house, which means we know how they work,” says the report’s co-author, James Maclaurin, an Otago philosophy of science professor. “That’s a practice we strongly recommend our Government continues.”

There are numerous overseas examples of the troubling results that arise when private companies are contracted by government departments to apply their proprietary algorithms. The most famous example is Compas, the system used by US judges to determine bail and sentencing limits.
A 2016 investigation of the system’s automated risk assessments by journalists at non-profit news organisation ProPublica revealed that Compas was biased against African Americans. Northpointe, the company that built Compas, defended its algorithm, but the system’s inner workings are kept secret to protect the company’s interests, so it is still unclear how it works.
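The bias ProPublica identified showed up in error rates: defendants who did not go on to reoffend were flagged high-risk far more often if they were black. A simplified sketch of that kind of check, run on invented records:

    # Simplified, hypothetical version of the kind of audit ProPublica ran:
    # compare false-positive rates (flagged high-risk but did not reoffend)
    # across groups. The records below are invented.
    records = [
        # (group, predicted_high_risk, actually_reoffended)
        ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
        ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
    ]

    def false_positive_rate(group):
        non_reoffenders = [r for r in records if r[0] == group and not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders)

    for group in ("A", "B"):
        print(f"Group {group}: false-positive rate = {false_positive_rate(group):.0%}")

Materially different false-positive rates between groups are exactly the kind of red flag an independent audit would look for.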
The Otago researchers argue for transparency around algorithms used in the public arena and propose a register of predictive algorithms used by the Government, complete with layperson summaries of how they work and whether they have been bought commercially or developed in-house.
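The report does not prescribe a format, but an entry in such a register might record something like the following; every field name and value here is hypothetical:

    # Hypothetical sketch of one entry in the proposed register. The
    # report does not prescribe a schema; these fields are invented.
    from dataclasses import dataclass

    @dataclass
    class RegisterEntry:
        name: str
        agency: str
        purpose: str        # layperson summary of what it predicts
        origin: str         # "in-house" or "commercial"
        inputs: list        # broad categories of data it is fed
        last_audited: str

    entry = RegisterEntry(
        name="Example school-leaver risk model",
        agency="Example ministry",
        purpose="Flags school leavers at higher risk of long-term unemployment",
        origin="in-house",
        inputs=["education history", "benefit history"],
        last_audited="never",
    )
    print(entry)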

But they go further, calling for the formation of an independent regulator, akin to the Privacy Commissioner, to oversee the use of algorithms in government. It is a sensible proposal given how much we have to lose if public trust is undermined.
By sorting out the Government’s use of algorithms, we may also identify an approach to the bigger problem: the use of algorithms by Google, Facebook and the plethora of companies that collect and mine our data every day, using methods cloaked in secrecy. That’s where sunlight really needs to be applied.
