creasingly vary from each other. In addition
to presenting a lack of precision in legal
terminology, AI also involves an ever-growing number of stakeholders such as clients,
insurers, law firms, contractors and regulatory entities, who all have different—and often
While some researchers believe that implementing abstract values in AI systems is feasible, other scholars vehemently disagree, arguing that such an approach is neither feasible nor desirable. Etzioni and Etzioni [2] provide compelling arguments for this position. They assert that a significant part of the ethical
challenges posed by AI-equipped devices
can be addressed by law enforcement and
the personal choices of the users. Personal
choices governed by laws are examples
of ethical decisions made collectively and
enforced collectively. AI devices must be programmed to observe these legal constraints; complying with them does not require the device itself to make an ethical decision. For other, more
nuanced decision-making, it will be very
difficult to program a moral code into an AI
device. Furthermore, attempting to automate ethical norms in AI systems raises questions about the social position that AI entities occupy in society. The future of
AI relies heavily on the quality and capabilities that scientific researchers and engineers
can deliver with particular emphasis on the
proven safety and reliability of systems as
they interact with humans. Examples such
as driverless cars, automated sentencing
recommendations to judges, smart voting
systems, and smart home appliances all
make excellent case studies for discussing with students how ethics can inform the design of such systems and how it can be built into them.
As they stand now, the EU
commission report and US NAIRD
plan are orthogonal documents
with each one emphasizing a
different approach to dealing with
the ethical concerns raised by AI
systems. The EU report focuses
on greater human responsibility
while the US report emphasizes optimism about the power of technology to solve ethical problems. It
may be that both approaches will
be needed to regulate the legal
and ethical implications of AI—
greater oversight of human decision makers as well as ethical design and regulation of AI-empowered devices. In a subsequent
article we will discuss more fully the
concept of the “personhood” of robots as
well as the liability and responsibility issues
inherent in AI technologies.
1. Delvaux, M. (Rapporteur). European Union Report with Recommendations to EU Commission on Civil Law Rules on Robotics, January 2017; http://www.EN. Accessed 22 August 2017.
2. Etzioni, A. and Etzioni, O. Incorporating Ethics into Artificial Intelligence. The Journal of Ethics (online issue), March 7, 2017; https://doi.org/10.1007/s10892-017-
3. Kemp, R. Legal Aspects of Artificial Intelligence (pp. 1-2, Rep.); http://www.kempitlaw.com/wp-Branding-.pdf. Accessed 22 August 2017.
4. Moynihan, D. The Rise of the Machines? Mondaq Business Briefing, March 24, 2017.
5. Networking and Information Technology Research and Development Subcommittee. The National Artificial Intelligence Research and Development Strategic Plan (October 2016), 26-7; strategic_plan.aspx. Accessed 22 August 2017.
C. Dianne Martin
Professor Emeritus of Computer
Science and Vice Provost for
The George Washington University
Rice Hall – Suite 813
2121 I Street, NW
Washington, DC 20052 USA
Toma Taylor Makoundou
Graduate Student of Computer Science
George Washington University
Washington, DC 20052 USA
DOI: 10.1145/3148541 Copyright held by authors.