bombardment has seldom worked, “mass disruption” from cyber attacks on infrastructure is even less likely to achieve the desired psychological effects. Such attacks will kindle great rage among those affected, leading to conflict escalation. In that larger conflict, the side that has learned to use cyber at the tactical level will prevail.
It may seem reassuring that the apparently Russia-friendly hacker groups are focusing on infrastructure targets, the implication being an emphasis on developing strategic, rather than tactical, cyberwar capabilities. But this is not an either-or situation. Aggressors might be cultivating battlefield cyber capabilities as well. How might one tell? One clue could be that infrastructure probes and attacks to date have generally not used zero-day exploits; almost all have been simple, employing watering-hole techniques (lying in wait at frequented sites), man-in-the-middle attacks (rerouting individuals’ Internet traffic), and other basic methods. The world’s cyber aggressors may have a whole other gear we have not seen, which will be revealed in a shooting war.
It is this latter sort of militarized conflict that David Ronfeldt and I envisioned when we wrote “Cyberwar Is Coming!” (http://bit.ly/2AtTlbt) a quarter-century ago. It is in its effects on the course of battles—on land, at sea, in the air, and in outer space—that cyber will show its true potential to transform warfare in the 21st century. Cyberwar is not simply a lineal descendant of strategic air power; rather, it is the next face of battle.
Student Evaluations of CS Teachers Are Likely Biased
April 23, 2017
Our campus has been having discussions about student evaluations of teaching. Our Center for Teaching and Learning circulated a copy of an article by Carl Wieman from Change magazine, “A Better Way to Evaluate Undergraduate Teaching” (http://bit.ly/2ipatVy). Wieman argues we need a better way to evaluate teaching; student evaluations do not correlate with desirable outcomes (as described at http://bit.ly/2iXrn17) and are biased.
“To put this in more concrete terms, the data indicate that it would be nearly impossible for a physically unattractive female instructor teaching a large required introductory physics course to receive as high an evaluation as that of an attractive male instructor teaching a small fourth-year elective course for physics majors, regardless of how well either teaches.”
Wieman suggests a Teaching Practices Inventory (http://bit.ly/2ioK5Le) as a better way to evaluate undergraduate teaching. Using practices that are evidence-based is likely to lead to better outcomes. This hasn’t been an easy sell, as Wieman discovered at the White House Office of Science and Technology Policy (http://bit.ly/2B1giUo). It has not gone over well on my campus, either.
Scholars like Nira Hativa argue student evaluations are an effective way to recognize good teaching (see http://amzn.to/2ingr94). Student evaluation of teaching is easy, and it is the current standard practice, which is difficult to change. Wieman’s Teaching Practices Inventory has been called “radical” on my campus.
I am not a scholar of studies about student evaluation of teaching. I study computing education. From what I know about computer science and unconscious bias, the quote from Wieman is likely just as true in computer science.
Unconscious bias is a factor in women’s underrepresentation in STEM generally, and in computer science specifically. The idea is that we all have biases that influence how we make decisions. Unconsciously, many of us (at least in the Western world) are biased to think computer scientists are mostly male. Unless we consciously recognize our biases, we are likely to express them in our decisions. A 2013 multi-institutional study (http://bit.ly/2jUJj9p) found undergraduates see computer scientists as male. That’s a source of bias.
Women in computer science (CS) report biases that keep them from succeeding in the field (http://bit.ly/2BH6N9P). Studies show female science students are more likely to be interrupted and less likely to get instructors to pay attention (http://for.tn/2A7ZIlu). The National Center for Women and IT (NCWIT) has developed a video titled “Unconscious bias and why it matters for women and tech” (http://bit.ly/2zPxyHW). A recent report from Google and researchers at Stanford University (http://bit.ly/2A8WiPL) presents evidence that unconscious bias influences teachers’ decisions in CS classrooms; they recommend professional development for the teachers, to help reduce their expression of bias. Google is funding the development of a simulation for teachers to address unconscious bias (http://bit.ly/2jhpEkp).
The tech industry recognizes unconscious bias is a significant problem. Microsoft is making its unconscious bias training available worldwide (http://bit.ly/2AsUOyu). Google is asking 60,000 employees to undergo training to recognize unconscious bias (http://read.bi/2kp144m).
So here’s the question: If unconscious bias is pervasive in computing, and training is our best remedy, how can untrained students evaluate their CS teachers without bias?
Computing Research News raised concerns about bias in student evaluations of CS teaching in 2003 (http://bit.ly/2koz7tk). A recent study found students biased against female instructors (http://bit.ly/2AVRdJZ). There is evidence that online students evaluate instructors more highly if they believe the instructors are male (http://bit.ly/2AZuk95).
I have not seen a study showing bias in CS students’ evaluations of their teachers, but the evidence that it is there is overwhelming. How could the students avoid it? We know that, without training, students evaluate teachers with bias. We have found unconscious bias across computing. How could undergraduates evaluate a female CS instructor fairly? What might lead them to evaluate teaching without gender bias?
We have too few women in computer science. We need to recruit more female faculty in CS and retain them. We need to encourage and reward good teaching. Relying on biased student evaluations as our only way to measure undergraduate teaching quality helps us with neither need.
John Arquilla is professor and chair of defense analysis at the U.S. Naval Postgraduate School; the views expressed are his alone. Mark Guzdial is a professor in the College of Computing at Georgia Institute of Technology.
© 2018 ACM 0001-0782/18/2 $15.00