More to Learn About Machine Learning
In their Viewpoint “Learning Machine Learning” (Dec. 2018), Ted G. Lewis and Peter J. Denning used a Q&A format to address machine learning and neural nets but, in my view, omitted two fundamental and important questions. The first is:

Q. Is machine learning the best way to get the most reliable and efficient solution to a problem?

A. Not generally.
To explain my answer, I need a definition of “machine learning.” Machine learning is a machine collecting data while providing a service and using the data to improve the speed or accuracy of that service. This is neither new nor unusual. For example, a search program can reorder its search list to move the most frequently requested items toward the top of the list. This improves performance until there is a major change in the probability of the items being requested; when that happens, performance may degrade until the machine “learns” the new probabilities. Suggestions offered by a search engine are also based on data collected while serving users; the search engine uses the data to “learn” what users are likely to ask.
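The reordering heuristic described above is the classic self-organizing list. A minimal sketch, using the move-to-front policy (the class and names here are illustrative, not from the Viewpoint under discussion):

```python
# A self-organizing search list: each successful lookup moves the
# requested item to the front, so frequently requested items drift
# toward the top and later searches for them finish sooner.

class SelfOrganizingList:
    def __init__(self, items):
        self.items = list(items)

    def search(self, key):
        """Linear search; on a hit, move the item to the front."""
        for i, item in enumerate(self.items):
            if item == key:
                self.items.insert(0, self.items.pop(i))
                return True
        return False

lst = SelfOrganizingList(["a", "b", "c", "d"])
lst.search("c")
print(lst.items)  # "c" has moved to the head: ['c', 'a', 'b', 'd']
```

As the letter notes, this “learning” merely tracks the observed request distribution; if that distribution shifts, the learned order is temporarily wrong and performance degrades until new requests reorder the list.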
When machine learning is used to “discover” an algorithm, it may find a local optimum: an algorithm that is better than similar algorithms but very different from a much better one. A human who took the time to understand the situation might find that better algorithm. Machine learning is often a lazy programmer’s way to solve a problem. Using machine learning may save the programmer time but fail to find the best solution. Further, the trained network may fail unexpectedly when it encounters data radically different from its training set.
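The local-optimum risk can be made concrete with a toy greedy search (the objective function and starting points below are invented purely for illustration): a hill climber started in the wrong basin halts at a minor peak and never discovers the much better one.

```python
# Toy illustration of stopping at a local optimum. f has a small
# peak near x = -1 and a much taller one near x = 2; greedy hill
# climbing only ever moves uphill, so its answer depends entirely
# on where the search starts.

def f(x):
    # Two unequal peaks: a minor one near x = -1, the best near x = 2.
    return -((x + 1) ** 2) * ((x - 2) ** 2) + x

def hill_climb(x, step=0.01):
    """Move to a neighboring point while doing so improves f."""
    while True:
        left, here, right = f(x - step), f(x), f(x + step)
        if here >= left and here >= right:
            return x  # no uphill neighbor: a (possibly local) optimum
        x = x - step if left > right else x + step

local = hill_climb(-2.0)  # stops at the minor peak near x = -1
best = hill_climb(1.5)    # finds the much better peak near x = 2
print(round(local, 2), round(best, 2))
```

Gradient-trained networks inherit the same hazard in a far higher-dimensional space, which is why the discovered solution may be merely the best one reachable from the starting point, not the best one available.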
The second Q&A pair Lewis and Denning should have addressed concerns “neural networks”:

Q. If developers have constructed (or simulated) a physical neural net-
As I read the special section on the China Region (Nov. 2018), I thought privacy in China deserved better treatment than it received in the section’s foreword, “Welcome to the China Region Special Section,” in which co-organizers Wenguang Chen and Xiang-Yang Li wrote that “People in China seem less sensitive about privacy.” It sounded almost identical to what Robin Li, CEO and co-founder of Baidu, said in a talk at the March 2018 China Development Forum, a remark that was not well received by China’s Internet users.[2]
A March 2018 survey of 100,000 Chinese households by CCTV and Tencent Research reported that 76.3% of participants view AI as a threat to privacy.[1] Other global privacy surveys, including one by KPMG, reported privacy awareness in China as far more prevalent than the authors seemed to imply.
One of the few critical notes in the special section came near the end of Elliott Zaagman’s article “China’s Computing Ambitions,” when it called the lack of (Western-style) legal protections and transparency “a real concern.” This was followed by a quote on the weaknesses of more-open digital societies. When the lack of privacy rights was mentioned elsewhere in the special section, it was described as “an accepted observation.”
Feng Chucheng of risk-analysis firm Blackpeak said, “Rather than simply reflecting [the status quo] that privacy protections are not well-developed in this society, [Baidu] should be leading the charge to improve privacy rights.”[2]
Perhaps the professors and analysts who contributed articles to the section should have tried to do the same. It would not have detracted from the quality of their articles.
The “West” itself shows signs of moving toward being a surveillance society, and no amount of “privacy rights” will change that historical direction. More than a few Western governments are actually envious of China’s unique applications of technology in society. We should be suspicious of government agencies and regulators that redefine privacy, downgrade it, or cite national security to make such applications fit their agenda. A similar observation can be made about privately run corporations, especially social networks.
Articles and columns in Communications should include, alongside each technological achievement, consideration of how it might be abused and of the lessons that should be learned when it is. This would mean extra work for every author, as well as increased reader skepticism, but it would surely increase awareness.
As a New Year’s resolution, I respectfully invite everyone to read or reread the ACM Code of Ethics and Professional Conduct (https://www.acm.org/code-of-ethics), especially sections 1.1, 1.2, and 1.6, and to incorporate it into their research and professional practice, especially those with authority and influence, or who publish in ACM’s leading publication.
References
1. Hersey, F. Almost 80% of Chinese concerned about AI threat to privacy, 32% already feel a threat to their work. TechNode (Mar. 2, 2018); https://technode.com/2018/03/02/almost-80-chinese-concerned-ai-threat-privacy-32-already-feel-threat-work/
2. Li, R. Are Chinese people ‘less sensitive’ about privacy? Sixth Tone (Mar. 27, 2018); http://www.sixthtone.com/news/1001996/are-chinese-people-less-sensitive-about-privacy%3F
Vincent Van Den Berghe, Leuven, Belgium
Response from the Editor-in-Chief
Van Den Berghe’s letter raises a good point: articles discussing technology can and should be enriched by discussion of their societal context, including potential abuses. I am pleased to see this topic being raised in the context of the China Region special section and believe it applies much more broadly, both globally and across a variety of topics. This is an important challenge to Communications authors. I am sure they will rise to it.
Andrew A. Chien, Chicago, IL, USA
Between the Lines in the China Region Special Section
DOI: 10.1145/3302011