Empowering the user requires a complete rethinking of the role of the user in the digital society. The user is no longer a passive consumer of digital technologies and a mere data producer for them. Her dignity as a human being implies ownership of personal data and the freedom to make responsible decisions. Autonomous technologies shall be designed and developed to respect that dignity. This elevates the user to an independent actor in the digital society, able to properly interact with the autonomous technologies she uses every day and equipped with the appropriate digital means.
The separation of digital ethics into hard and soft ethics suggests that hard ethics is what the autonomous system shall comply with, while soft ethics is specific to each individual user. To uphold the principle of human dignity, the system shall not violate an individual's soft ethics during its interactions with her. The autonomous system architecture shall permit this interaction to happen by complying with the user's moral prerogatives and capabilities. Users need to be able to verify the systems they use, possibly by imposing their own ethical requirements on them.
The separation of concerns implied by this notion of digital ethics suggests an overall framework in which the autonomy of the system is delimited by hard ethics requirements, users are empowered with their own soft ethics, and the interactions between the system and each user are further constrained by that user's soft ethics requirements. Therefore, the decisions an autonomous system makes need to comply not only with legislation but also with each user's moral preferences. (See the intersection between soft and hard ethics in the accompanying figure.)
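As a rough sketch only, the intersection of hard and soft ethics can be read as a conjunction of constraints: a candidate action is admissible only if it satisfies both the regulatory (hard) rules embedded in the system and the preference (soft) rules supplied by the user. The actions, rules, and names below are hypothetical illustrations, not part of any existing system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Action:
    """A candidate action the autonomous system is considering."""
    name: str
    shares_personal_data: bool
    profiles_user: bool

# A rule maps an action to True (permitted) or False (forbidden).
Rule = Callable[[Action], bool]

# Hard ethics: non-negotiable rules embedded by design,
# standing in for legal obligations (illustrative only).
hard_rules: List[Rule] = [
    lambda a: not a.profiles_user,  # e.g., regulation forbids covert profiling
]

# Soft ethics: preference rules declared by one particular user.
alice_soft_rules: List[Rule] = [
    lambda a: not a.shares_personal_data,  # Alice refuses any data sharing
]

def admissible(action: Action, hard: List[Rule], soft: List[Rule]) -> bool:
    """An action is admissible only in the intersection of hard and soft ethics."""
    return all(rule(action) for rule in hard) and all(rule(action) for rule in soft)

candidates = [
    Action("recommend-locally", shares_personal_data=False, profiles_user=False),
    Action("recommend-via-cloud", shares_personal_data=True, profiles_user=False),
    Action("targeted-ad", shares_personal_data=True, profiles_user=True),
]

# For Alice, only the action satisfying both rule sets survives.
allowed = [a.name for a in candidates
           if admissible(a, hard_rules, alice_soft_rules)]
```

For a different user with more permissive soft rules, the same hard rules would admit a larger set of actions, which is exactly the fine-grained, per-user delimitation the framework calls for.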
In such a framework, it should
also be possible to deal with liability
issues in a fine-grained way by distributing responsibility between the
system and the user(s) according to
hard and soft ethics. The envisioned
framework requires several steps.
On the ethics side, provided that autonomous systems are developed in compliance with hard ethics, that is, with regulations, the crucial issue to face is respecting each individual's soft ethics. If verifying the compliance of autonomous systems with hard ethics is already raising huge scientific interest and great worries (given the use of opaque AI techniques),1,2,14 defining the scope of soft ethics and characterizing each individual's is a daunting task.
Indeed, neither a person nor a society applies moral categories separately. Rather, everyday morality is in constant flux among norms, utilitarian assessment of consequences, and evaluation of virtues.
Nevertheless, a digital society that fully realizes the principle of human dignity shall allow each individual to express her soft ethics preferences. Further challenges concern the means to consistently combine a user's soft ethics with the system's hard ethics and to manage interactions of the system with users who endorse different ethics preferences. Autonomous systems shall be realized by embedding hard ethics by design while remaining open to accommodating users' soft ethics. This could be achieved through system customization or by mediating the interactions between the system and the user; in either case, it requires rethinking the system architecture.
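One way to keep the system closed with respect to hard ethics yet open to individual soft ethics is a mediator that sits between the user and the system and vets each requested operation against the user's declared preferences. The sketch below is a hypothetical illustration of that architectural idea; the class and operation names are invented for the example and do not describe any particular project's design.

```python
class PreferenceVault:
    """Illustrative user-side store of soft-ethics preferences."""
    def __init__(self, forbidden_operations):
        self.forbidden = set(forbidden_operations)

    def permits(self, operation: str) -> bool:
        return operation not in self.forbidden

class EthicsMediator:
    """Wraps an autonomous system; every call is vetted
    against the user's soft ethics before it reaches the system."""
    def __init__(self, system, vault: PreferenceVault):
        self.system = system
        self.vault = vault

    def request(self, operation: str, *args):
        if not self.vault.permits(operation):
            return None  # refuse on the user's behalf; could also negotiate or log
        return getattr(self.system, operation)(*args)

# A toy system assumed to already comply with hard ethics internally.
class RecommenderSystem:
    def recommend(self, topic):
        return f"top results for {topic}"
    def sell_history(self):
        return "user browsing history"

vault = PreferenceVault(forbidden_operations={"sell_history"})
mediated = EthicsMediator(RecommenderSystem(), vault)
```

With this wiring, `mediated.request("recommend", "books")` goes through, while `mediated.request("sell_history")` is refused on the user's behalf; the same system, paired with a different vault, would behave according to that user's preferences without any change to the system itself.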
Building systems that embody ethical principles by design may also confer a competitive advantage in the market, as predicted in the recent Gartner Top 10 Strategic Technology Trends for 2019.23
Computer scientists alone cannot solve the scientific and technical
challenges we have ahead. A multi-disciplinary effort is needed that calls
for philosophers, sociologists, law
specialists, and computer scientists
working together.
Acknowledgments. The author is indebted to the multi-disciplinary team
of the EXOSOUL@univaq project
(http://exosoul.disim.univaq.it) for
enlightening debates and joint work
on digital ethics for autonomous systems.
References
1. ACM U.S. Public Policy Council. Statement on algorithmic transparency and accountability, 2018; https://bit.ly/2j4IJEV.
2. Larus, J. et al. When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making, 2018; https://dl.acm.org/citation.cfm?id=3185595.
3. EDPS. Opinion 4/2015: Towards a new digital ethics—
data, dignity and technology; https://edps.europa.eu/
sites/edp/files/publication/15-09-11_data_ethics_
en.pdf.
4. EDPS. Leading by example, The EDPS Strategy 2015–2019; https://bit.ly/2MpegjJ.
5. European Group on Ethics in Science and New Technologies. Statement on artificial intelligence, robotics and 'autonomous' systems; https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.
6. Burgess, J.P., Floridi, L., Pols, A. and van den Hoven, J. Towards a digital ethics—EDPS ethics advisory group; https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf.
7. Floridi, L. Soft ethics and the governance of the
digital. Philosophy & Technology 31, 1 (Mar. 2018),
1–8.
8. The DECODE project; https://decodeproject.eu.
9. Contissa, G., Lagioia, F. and Sartor, G. The ethical
knob: Ethically customisable automated vehicles and
the law. AI and Law 25, 3 (2017), 365–378.
10. Gogoll, J. and Müller, J.F. Autonomous cars: In
favor of a mandatory ethics setting. Science and
Engineering Ethics 23, 3 (June 2017), 681–700.
11. Ethics Commission Automated and Connected Driving. Appointed by the German Federal Minister of Transport and Digital Infrastructure, June 2017 report; https://bit.ly/2xx18DZ.
12. Cath, C. et al., Eds. Governing artificial intelligence: ethical, legal, and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. Royal Society, Nov. 2018.
13. Declaration on Ethics and Data Protection in Artificial
Intelligence at the 40th Intern. Conference of Data
Protection and Privacy Commissioners, Oct. 2018;
https://bit.ly/2Cz31AG.
14. AI Now Institute, New York University, 2017 Annual Report; https://ainowinstitute.org/AI_Now_2017_Report.pdf.
15. AI Now Institute, New York University, 2018 Annual Report; https://ainowinstitute.org/AI_Now_2018_Report.pdf.
16. Awad, E. et al. The Moral Machine experiment. Nature 563 (Oct. 2018), 59–64.
17. Li, T. China's influence on digital privacy could be global; https://wapo.st/2TffDE0.
18. Vardi, M. Are we having an ethical crisis in computing? Commun. ACM 62, 1 (Jan. 2019), 7; https://cacm.acm.org/magazines/2019/1/233511-are-we-having-an-ethical-crisis-in-computing/fulltext.
19. Nemitz, P. Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. Royal Society, Nov. 2018.
20. Wakabayashi, D. California passes sweeping law to
protect online privacy. New York Times (June 28,
2018); https://nyti.ms/2tGjAaf.
21. The European Commission's High-Level Expert Group on Artificial Intelligence. Draft Ethics Guidelines for Trustworthy AI; https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56433.
22. Artificial Intelligence: A European Perspective. European Commission Joint Research Centre, Dec. 2018; https://ec.europa.eu/jrc/en/artificial-intelligence-european-perspective.
23. Gartner Top 10 Strategic Technology Trends for 2019; https://gtnr.it/2CJJYGp.
Paola Inverardi is a professor in the Department of Information Engineering, Computer Science and Mathematics at the University of L'Aquila, L'Aquila, Italy.
Copyright held by author/owner.
Publication rights licensed to ACM. $15.00.