According to a Pew Research Center survey, more than half of Americans say they would outright refuse to ride in a driverless car. Why?
Many fear they cannot trust the software undergirding such technologies,
and believe the cars will be dangerous.
Furthermore, respondents to the Pew
poll indicated they do not believe driverless cars will have much of a positive impact on road safety, with 30%
reporting they believe road deaths
would increase, and another 31% saying they would probably remain about
the same. Yet our current human-operated system produces the equivalent of a massacre on the roads each year. The year 2016 saw the highest number of road fatalities in the past decade, with some 40,000 needless deaths caused by human drivers. Put another way, more than 100 people were killed by a human driver each day. Autonomous vehicles, on the other hand, could reduce traffic fatalities by up to 90%.k This means that delaying driverless car technologies due to regulatory anxieties could cost tens of thousands of needless lives each year.
A Mercatus Center modell suggests that a 5% regulatory delay could yield an additional 15,500 needless fatalities, while a 25% delay would mean 112,400 needless deaths. The difference between regulatory humility and regulatory dithering could literally be the difference between life and death for many.
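The arithmetic behind this argument is simple enough to sketch. Below is a minimal back-of-the-envelope calculation in Python; it is not the Mercatus Center model itself. The fatality figure and the 90% reduction come from the text above, while the function name and the assumption that the full safety benefit arrives all at once are illustrative simplifications.

```python
# Back-of-the-envelope sketch of the regulatory-delay argument above.
# This is NOT the Mercatus Center model; it only illustrates the shape
# of the reasoning under deliberately simple assumptions.

ANNUAL_ROAD_DEATHS = 40_000    # approximate road fatalities in 2016 (from the text)
AV_FATALITY_REDUCTION = 0.90   # assumed upper-bound reduction once AVs are deployed (from the text)

def extra_deaths(delay_years: float) -> float:
    """Added fatalities if the full safety benefit of autonomous
    vehicles is postponed by `delay_years` (all-at-once assumption)."""
    return ANNUAL_ROAD_DEATHS * AV_FATALITY_REDUCTION * delay_years

for years in (0.5, 1, 5):
    print(f"{years:>4} year(s) of delay -> ~{extra_deaths(years):,.0f} needless deaths")
```

Even under this crude model, a half-year delay implies roughly 18,000 avoidable deaths, which is the intuition behind the Mercatus estimates quoted above.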
A Better Path Forward: Humility and Restraint
This illustration should not be construed as a call to “do nothing.” Rather,
it is meant to paint a picture of the real
potential cost of bad policy. Instead of rushing to regulate in an attempt to formalize safety into law, we should first pause and consider the risks of avoiding all risks.
In our recent research paper, “Artificial Intelligence and Public Policy,”m co-authored with Raymond Russell, we outline a path forward for policymakers to embrace permissionless innovation for AI technologies. In general, we recommend that policymakers:
˲ Articulate and defend permissionless innovation as the general policy default.
˲ Identify and remove barriers to entry and innovation.
˲ Protect freedom of speech and expression.
˲ Retain and expand immunities for
intermediaries from liability associated with third-party uses.
˲ Rely on existing legal solutions and
the common law to solve problems.
˲ Wait for insurance markets and
competitive responses to develop.
˲ Push for industry self-regulation
and best practices.
˲ Promote education and empowerment solutions and be patient as social
norms evolve to solve challenges.
˲ Adopt targeted, limited legal measures for truly hard problems.
˲ Evaluate and reevaluate policy decisions to ensure they pass a strict benefit-cost analysis.
Of course, these recommendations must be tailored to the kind of application under consideration. Social media and content aggregation services already enjoy liability protection under Section 230 of the Communications Decency Act of 1996, but the question of liability for software developers of autonomous vehicles is still being discussed.
In that regard, we should not forget
the important role the courts and common law will play in disciplining bad
actors. If algorithms are faulty and create serious errors or “bias,” powerful
remedies already exist in the form of
product defects law, torts, contract law,
property law, and class-action lawsuits.
Meanwhile, at the federal level, the
Federal Trade Commission already
possesses a wide range of consumer
protection powers through its broad
authority to police “unfair and deceptive practices.” Similarly, at the state
level, consumer protection offices
and state attorneys general also address unfair practices and continue
to advance their own privacy and data
security policies, some of which are more stringent than federal law. So, we can dispense with the idea that AI is not regulated. Regulatory advocates and concerned policymakers might still be able to identify particular AI applications that present true and immediate threats to society (such as “killer robots” or other existential threats) and that require more serious consideration and potential control. Government use of profiling software for law enforcement falls into this category, due to its capacity to violate established civil liberties. But we should realize that the vast majority of AI applications do not fit into this bucket; for most AI applications, the promised benefits far outweigh the imagined danger, which can so seductively inflame our anxieties and lead to overreaction.

The more sensible tone and policy disposition for AI was nicely articulated by The One Hundred Year Study on Artificial Intelligence,n a Stanford University-led project that brought together 17 of the leading experts to compile a comprehensive report on AI issues. “Misunderstanding about what AI is and is not, especially against a background of scare-mongering, could fuel opposition to technologies that could benefit everyone. This would be a tragic mistake,” they argued. “Regulation that stifles innovation, or relocates it to other jurisdictions, would be similarly counterproductive.”

That is precisely the sort of humility and patience that should guide our public policies toward AI going forward. As our machines get smart, it is vital for us to make our policies even smarter.

Andrea O’Sullivan (email@example.com) is the former Technology Policy Program Manager at the Mercatus Center at George Mason University, Fairfax, VA, USA.

Adam Thierer (firstname.lastname@example.org) is a Senior Research Fellow at the Mercatus Center at George Mason University, Fairfax, VA, USA.

Copyright held by authors.