researchers stated. “However, what
we should be focusing on is design-
ing AI that delivers results that are in
line with people’s well-being. By ob-
serving human reactions to various
outcomes, AI could learn through
a technique called ‘cooperative in-
verse reinforcement learning’ what
our preferences are, and then work
towards producing results consistent
with those preferences.”
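As a rough, concrete illustration of that idea, the sketch below shows a toy preference-learning loop in Python. It is not the researchers’ actual method, and the outcome names, the simulated human, and the feedback probabilities are all invented for the example: the system proposes outcomes, observes approving or disapproving reactions, and gradually estimates which outcomes people prefer.

```python
# Toy sketch of learning preferences from observed human reactions.
# This is an illustrative simplification, not cooperative inverse
# reinforcement learning itself; the outcomes and the simulated human
# below are made up for the example.
import random

OUTCOMES = ["approve_benefit", "request_more_info", "deny_benefit"]  # hypothetical

# Hidden human preference the system does not know (used only to simulate feedback).
TRUE_PREFERENCE = {"approve_benefit": 0.9, "request_more_info": 0.6, "deny_benefit": 0.1}

def human_reaction(outcome: str) -> int:
    """Simulated human: reacts positively (1) or negatively (0) to an outcome."""
    return 1 if random.random() < TRUE_PREFERENCE[outcome] else 0

def learn_preferences(rounds: int = 300) -> dict:
    """Estimate how much the human likes each outcome from observed reactions."""
    positives = {o: 0 for o in OUTCOMES}
    trials = {o: 0 for o in OUTCOMES}
    for _ in range(rounds):
        outcome = random.choice(OUTCOMES)   # try an outcome
        reaction = human_reaction(outcome)  # observe the human's reaction
        positives[outcome] += reaction
        trials[outcome] += 1
    return {o: positives[o] / max(trials[o], 1) for o in OUTCOMES}

if __name__ == "__main__":
    estimates = learn_preferences()
    print("estimated preferences:", estimates)
    print("the system would now favor:", max(estimates, key=estimates.get))
```

Cooperative inverse reinforcement learning frames this far more carefully, with the human and the machine acting together rather than the machine simply polling reactions, but the core loop of inferring preferences from observed behavior is the same.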
AI systems need to be held account-
able, says Alexandra Chouldechova,
an assistant professor of statistics and
public policy at Carnegie Mellon Uni-
versity’s Heinz College of Information
Systems and Public Policy.
“Systems fail to achieve their purported goals all the time,” Chouldechova notes. “The questions are:
Why? Can it be fixed? Could it have
been prevented in the first place?
“By being clear about a system’s in-
tended purpose at the outset, transpar-
ent about its development and deploy-
ment, and proactive in anticipating its
impact, we can hopefully reach a place
where there will be fewer adverse unintended consequences.”
For the foreseeable future, Hen-
dler believes humans and computers
working together will outperform ei-
ther one separately. For the partner-
ship to work, a human must be able
to understand the decision-making of
the AI system, he says.
“We currently teach people to take
the data and feed it into AI systems to
get an ‘unbiased answer.’ That unbi-
ased answer is used to make predic-
tions and help people find services,”
Hendler says. “The problem is, the
data coming in has been chosen in
various ways, and we don’t educate computer or data scientists in how to know whether the data in their databases will model the real world.”
This is certainly not a new prob-
lem. Hendler recalls the famous case
of Stanislav Petrov, a Soviet lieuten-
ant-colonel whose job was to monitor
his country’s satellite system. In 1983,
the computers sounded an alarm in-
dicating the U.S. had launched nu-
clear missiles. Instead of launching a
counterattack, Petrov felt something
was wrong and refused; it turned out
to be a computer malfunction. AI sci-
entists, says Hendler, should learn from this example.
“The real danger is people overtrusting these ‘unbiased’ AI systems,”
he says. “What I’m afraid of is most
people don’t understand these issues
… and just will trust the system the way
they trust other computer systems. If
they don’t know these systems have
these limitations, they won’t be looking for the alternatives that humans
are good at.”
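Hendler’s worry about data that does not model the real world can be made concrete with a very small check. The sketch below is only an illustration; the group names and every number in it are invented. It compares how groups are represented in a hypothetical training database against an external population estimate and flags large gaps:

```python
# Minimal sketch: check whether the composition of a training database
# roughly matches the population it is meant to model. The group names
# and all of the numbers below are invented for illustration.

training_counts = {"group_a": 7200, "group_b": 1800, "group_c": 1000}   # rows in the database
population_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}  # external estimate

total = sum(training_counts.values())

print(f"{'group':10} {'in data':>8} {'in population':>14} {'gap':>8}")
for group, count in training_counts.items():
    data_share = count / total
    gap = data_share - population_share[group]
    flag = "  <-- check" if abs(gap) > 0.05 else ""
    print(f"{group:10} {data_share:8.2f} {population_share[group]:14.2f} {gap:+8.2f}{flag}")
```

A gap like this does not by itself prove a system is biased, but it is the kind of question Hendler suggests data scientists should be trained to ask before trusting an ‘unbiased answer.’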
Further Reading

Madras, D., Creager, E., Pitassi, T., and Zemel, R.
Learning Adversarially Fair and Transferable Representations, 17 Feb. 2018, Cornell University Library, https://arxiv.org/

Buolamwini, J. and Gebru, T.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, Conference on Fairness, Accountability and Transparency, 2018.

Dovey Fishman, T., Eggers, W.D., and Kishnani, P.
AI-augmented human services: Using cognitive technologies to transform program delivery, Deloitte Insights, 2017.

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark, Sept. 7–11, 2017. https://pdfs.

Tan, S., Caruana, R., Hooker, G., and Lou, Y.
Auditing Black-Box Models Using Transparent Model Distillation With Side Information, 17 Oct. 2017, Cornell University Library.

O’Neil, C.
Weapons of Math Destruction, Crown Random House, 2016.

Hardt, M., Price, E., and Srebro, N.
Equality of Opportunity in Supervised Learning, 11 Oct. 2016.
Esther Shein is a freelance technology and business
writer based in the Boston, MA, USA, area.
© 2018 ACM 0001-0782/18/10 $15.00
from historical data, the tendency
is to repeat those patterns in some
sense,” Madras says.
Etzioni believes an AI system can
be bias-free even when bias is input,
although that is not an easy thing to
achieve. An original algorithm tries to
maximize consistency with data, he
says, but that past data may not be the right guide if it reflects bias.
“If we can define a criterion and
mathematically describe what it
means to be free of bias, we can give
that to the machine,” he says. “The
challenge becomes describing formal-
ly or mathematically what bias means,
and secondly, you have to have some
adherence to the data. So there’s really
a tension between consistency with the
data, which is clearly desirable, and being bias-free.”
People are working to ensure both consis-
tency and being bias-free can be sup-
ported, he adds.
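One way to make that tension concrete: a bias criterion can be written down as a formula over a model’s predictions and checked automatically. The sketch below is only an illustration (the groups, predictions, and outcomes are invented); it computes per-group positive-prediction rates and true-positive rates, in the spirit of criteria such as the equality-of-opportunity measure of Hardt, Price, and Srebro listed in the Further Reading. Pushing those gaps toward zero can pull against fitting the historical data exactly, which is the tension Etzioni describes.

```python
# Minimal sketch: a machine-checkable bias criterion over model predictions.
# The groups, predictions, and true outcomes are invented for illustration.

# (group, model_prediction, true_outcome) for a handful of hypothetical cases
records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

def rates(group: str):
    """Return (positive-prediction rate, true-positive rate) for one group."""
    rows = [r for r in records if r[0] == group]
    positive_rate = sum(pred for _, pred, _ in rows) / len(rows)
    qualified = [pred for _, pred, truth in rows if truth == 1]
    true_positive_rate = sum(qualified) / len(qualified) if qualified else 0.0
    return positive_rate, true_positive_rate

pr_a, tpr_a = rates("group_a")
pr_b, tpr_b = rates("group_b")

# Two example criteria: demographic parity (similar positive-prediction rates)
# and equal opportunity (similar true-positive rates for qualified people).
print(f"positive-prediction rate gap: {abs(pr_a - pr_b):.2f}")
print(f"true-positive rate gap:       {abs(tpr_a - tpr_b):.2f}")
print("passes a 0.10 equal-opportunity threshold:", abs(tpr_a - tpr_b) <= 0.10)
```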
For AI to augment the work of government case workers and make social programs more efficient, the technical progress being made must be coupled with educating people on how to use these programs, Etzioni says.
“Part of the problem is when a human just blindly adheres to the recommendations of the system without trying to make sense of them, and says, ‘The system says it, so it must be true.’ But if the machine’s analysis is one output and a sophisticated person analyzes it, we find ourselves in the best of both worlds.”
AI, he says, really should stand for
“augmented intelligence,” where tech-
nology plays a supporting role.
“Humans are better than com-
puters at exploring those grey ar-
eas around the edges of problems,”
agrees Hendler. “Computers are bet-
ter at the black-and-white decisions
in the middle.”
The issue of transparency of algo-
rithms and bias was discussed at a
November 2017 conference held by
the Paris-based Organization for Eco-
nomic Cooperation and Development
(OECD). Although several beneficial so-
cietal use-cases of AI were mentioned,
researchers said the solution lies in ad-
dressing system bias from a policy per-
spective as well as a design perspective.
“Right now, AI is designed so as
to optimize a given objective,” the