tems should be designed with human
needs in mind,” the report states. That
means they first need to determine the
biggest pain points for caseworkers,
and the individuals and families they
serve. Issues to factor in include which processes are the most complex and whether they can be simplified, and which activities take the most time and whether they can be streamlined, the report suggests.
Use of these systems is in the early
stages, but we can expect to see a growing number of government agencies
implementing AI systems that can automate social services to reduce costs
and speed up delivery of services, says
James Hendler, director of the Rensselaer Institute for Data Exploration and
Applications and one of the originators
of the Semantic Web.
“There’s definitely a drive, as more
people need social services, to bring
in any kind of computing automation
and obviously, AI and machine learning are offering some new opportunities in that space,” Hendler says.
One way an AI system can be beneficial is when someone seeking benefits needs to access cross-agency information. For example, if someone is trying to determine whether they can get their parents into a government-funded senior living facility, there are myriad questions to answer.
“The potential of AI and machine learning is figuring out how to get people to the right places to answer their questions, and it may require going to many places and piecing together information. AI can help you pull it together as one activity,” he says.
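As a rough sketch of the kind of cross-agency piecing-together described above, the example below merges answers from several hypothetical agency records into a single eligibility checklist. The agency names, fields, and figures are invented for illustration; a real system would query separate, separately authorized data sources.

```python
# Hypothetical, hard-coded stand-ins for records held by different agencies.
# In practice each of these would be a separate query to a separate system.
HEALTH_AGENCY = {"parent-123": {"needs_assisted_living": True}}
REVENUE_AGENCY = {"parent-123": {"annual_income": 18_000}}
HOUSING_AGENCY = {"income_limit_for_subsidy": 25_000, "waitlist_open": True}

def senior_housing_checklist(parent_id: str) -> dict:
    """Piece together one eligibility picture from several agency sources."""
    needs_care = HEALTH_AGENCY.get(parent_id, {}).get("needs_assisted_living", False)
    income = REVENUE_AGENCY.get(parent_id, {}).get("annual_income")
    income_ok = income is not None and income <= HOUSING_AGENCY["income_limit_for_subsidy"]
    return {
        "needs_assisted_living": needs_care,
        "meets_income_limit": income_ok,
        "waitlist_open": HOUSING_AGENCY["waitlist_open"],
        "likely_eligible": needs_care and income_ok and HOUSING_AGENCY["waitlist_open"],
    }

print(senior_housing_checklist("parent-123"))
```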
One of the main, persistent problems these systems have, however, is inherent bias, because data is input by biased humans, experts say.
Just like “Murphy’s Law,” which states that “anything that could go wrong, will,” Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, says there’s a Murphy’s Law for AI: “It’s a law of unintended consequences, because a system looks at a vast range of possibilities and will find a very counterintuitive solution to a problem.”
“People struggle with their own biases, whether racist or sexist, or because they’re just plain hungry,” he says. “Research has shown that there are [judicial] sentencing differences based on the time of day.”
Machines fall short in that they have no “common sense,” so if a data error is input, the machine will continue to apply that error, Etzioni says. Likewise, if historical data contains an objectionable pattern and that data is used to create predictive models for the future, the machine will not override the pattern.
“It won’t say, ‘this behavior is racist or
sexist and we want to change that’; on the
contrary, the behavior of the algorithm is
to amplify behaviors found in the data,”
he says. “Data codifies past biases.”
Because machine learning systems seek a signal or pattern in the data, “we need to be very careful in the application of these systems,” Etzioni says. “If we are careful, there’s a great potential benefit as well.”
To make AI and machine learning systems work appropriately, many cognitive technologies need to be trained and retrained, according to the Deloitte report. “They improve via deep learning methods as they interact with users. To make the most of their investments in AI, agencies should adopt an agile approach [with software systems], continuously testing and training their cognitive technologies.”
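One way to read the report’s call for continuous testing and training is a scheduled retrain-and-evaluate cycle in which a candidate model replaces the deployed one only if it does not regress on a held-out test set. The sketch below, using synthetic data and a simple scikit-learn classifier, is an assumption about how that might look in practice, not a procedure prescribed by the Deloitte report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_batch(n):
    """Hypothetical stand-in for newly labeled casework data."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# A fixed holdout set used to test every retrained model before deployment.
X_test, y_test = synthetic_batch(500)

X_train, y_train = synthetic_batch(200)
deployed, deployed_score = None, 0.0

for cycle in range(5):                       # one iteration per release cycle
    X_new, y_new = synthetic_batch(100)      # data gathered since the last cycle
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])

    candidate = LogisticRegression().fit(X_train, y_train)
    score = accuracy_score(y_test, candidate.predict(X_test))

    # Redeploy only if the retrained model does not regress on the holdout set.
    if score >= deployed_score:
        deployed, deployed_score = candidate, score
    print(f"cycle {cycle}: candidate accuracy {score:.3f}, deployed {deployed_score:.3f}")
```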
David Madras, a Ph.D. student and machine learning researcher at the University of Toronto (U of T), believes if an algorithm is not certain of something, rather than reach a conclusion, it should have the option to indicate uncertainty and defer to a human.
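A minimal way to picture that option is a decision rule that returns an automated answer only when the model’s confidence clears a threshold, and otherwise passes the case to a person. The sketch below illustrates the idea in simplified form; it is not Madras’s actual method, and the 0.85 cutoff is an arbitrary stand-in.

```python
def predict_or_defer(probability: float, threshold: float = 0.85) -> str:
    """Return an automated decision only when the model is confident;
    otherwise defer ("pass") the case to a human reviewer.

    `probability` is the model's estimated probability of the positive class.
    The 0.85 cutoff is an arbitrary illustration, not a recommended value.
    """
    if probability >= threshold:
        return "approve"
    if probability <= 1 - threshold:
        return "deny"
    return "defer to human"

for p in (0.95, 0.50, 0.08):
    print(p, "->", predict_or_defer(p))
```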
Madras and colleagues at U of T developed an algorithmic model that incorporates fairness. The definition of fairness they used for their model is based on “equalized odds,” which they found in a 2016 paper, “Equality of Opportunity in Supervised Learning,” by computer scientists from Google, the University of Chicago, and the University of Texas at Austin. According to that paper, Madras explains, “the model’s false positive and false negative rates should be equal for different groups (for example, divided by race). Intuitively, this means the types of mistakes should be the same for different types of people (there are mistakes that can advantage someone, and mistakes that can disadvantage someone).”
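In code, checking the equalized-odds condition Madras describes amounts to comparing false positive and false negative rates across groups. The sketch below does this for hypothetical labels, predictions, and group memberships; it illustrates the definition rather than reproducing the researchers’ model.

```python
import numpy as np

def group_error_rates(y_true, y_pred, group):
    """Compute false positive and false negative rates per group.

    Under equalized odds (Hardt et al., 2016), these rates should be
    (approximately) equal across groups.
    """
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fpr = float(np.mean(yp[yt == 0] == 1)) if np.any(yt == 0) else float("nan")
        fnr = float(np.mean(yp[yt == 1] == 0)) if np.any(yt == 1) else float("nan")
        rates[str(g)] = {"FPR": fpr, "FNR": fnr}
    return rates

# Hypothetical labels, predictions, and group memberships (e.g., two demographic groups).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(group_error_rates(y_true, y_pred, group))
```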
The U of T researchers wanted to examine the unintended side effects of machine learning in decision-making systems, since a lot of these models make assumptions that don’t always hold in practice. They felt it was important to consider the possibility that an algorithm could respond “I don’t know” or “pass,” which led them to think about the relationship between a model and its surrounding system. “There is often an assumption in machine learning that the data is a representative sample, or that we know exactly what objective we want to optimize.” That has proven not to be the case in many decision problems, he says.
Madras acknowledges the difficulty
of knowing how to add fairness to (or
subtract unfairness from) an algorithm. “Firstly, unfairness can creep
in at many points in the process, from
problem definition, to data collection,
to optimization, to user interaction.”
Also, he adds, “Nobody has a great
single definition of ‘fairness.’ It’s a very
complex, context-specific idea [that]
doesn’t lend itself easily to one-size-fits-all solutions.”
The definition they chose for their
model could just as easily be replaced
by another, he notes.
In terms of whether social services systems can be unbiased when the algorithm running them may have built-in biases, Madras says that when models learn from historical data, they will pick up any natural biases, which will be a factor in their decision-making. “It’s also very difficult to make an algorithm unbiased when it is operating in a highly biased environment, especially when a model is learned
“Humans are better than computers at exploring those grey areas around the edges of problems. Computers are better at the black-and-white decisions in the middle.”