ly handled return code when the pool
is empty, and later results in a double-free when the consumer attempts to
reenqueue the same object multiple
times into a pool but fails because the
pool is then full.
These statements were part of a
thought process around solving a real
bug. The first hypothesis is indicative
of very little planning or research, and
is the result of the “hunch” programmers have about what could constitute
a bug. This is a testable hypothesis,
but it is poor: if this hypothesis is confirmed through testing, the testing is
unable to provide any more data on
how to solve the problem.
The second statement is marginally better. It’s clear that it is operating
on more information, and so it seems
like the bug has been reproduced at
this point. This hypothesis is still incomplete, because it does not make
any predictions as to why concurrent
log producers would produce the
same item. Furthermore, though it
sounds like it describes what the failure is (a race condition), this is not actually the terminal flaw, as described
in the third hypothesis.
This third hypothesis is clearly the
best. It describes both why the bug
happens and what the failure is. Importantly, it identifies that the cause of
failure occurs separately from where
and when the program actually fails.
This hypothesis is great because it can
be very specifically tested. If regression tests are part of your development
framework, only this hypothesis provides a description of how such a test
should behave.
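To make this concrete, here is a minimal sketch of how such a regression test could behave. The pool type and the pool_get function below are hypothetical stand-ins invented for illustration, not the interfaces of the system under discussion; the point is that the third hypothesis tells the test exactly what to assert: dequeueing from an exhausted pool must fail visibly instead of handing the same entry to two producers.

#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4

/* Hypothetical fixed-size entry pool, used only for illustration. */
struct pool {
    void   *items[POOL_SIZE];
    size_t  count;
};

/* Dequeue an entry; return NULL (not a stale slot) when the pool is empty. */
static void *
pool_get(struct pool *p)
{
    if (p->count == 0)
        return NULL;
    return p->items[--p->count];
}

/* Regression test shaped by the third hypothesis: an empty pool must
 * report failure rather than return a duplicate or stale entry. */
static void
test_empty_pool_get_fails(void)
{
    struct pool p = { .count = 0 };

    assert(pool_get(&p) == NULL);
    assert(pool_get(&p) == NULL);
}

int
main(void)
{
    test_empty_pool_get_fails();
    return 0;
}

A fuller test would also exercise concurrent producers, but even this single-threaded check pins down the return-code behavior that the hypothesis blames.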
Falsifiability is a crucial property of a real hypothesis.
If a hypothesis cannot be proven false,
any test will confirm it. This cannot
possibly give you confidence that you
understand the issue.
Forming a sound hypothesis is
important for other reasons as well.
Mental models can be used to intuit
the causes of some bugs, but for the
more difficult problems, relying on
the mental model to describe the
problem is exactly the wrong thing
to do: the mental model is incorrect,
which is why the bug happened in
the first place. Throwing away the
mental model is crucial to forming a
sound hypothesis.
This may be harder than it seems. For example, comments in code suspected to contain bugs may reinforce existing mental models, causing you to paper over buggy code in the belief that it is obviously correct. Consider:
/* Flush all log entries */
for (i = 0; i <= n_entries; i++) {
    flush_entry(&entry[i]);
}
This code (maybe obviously) illustrates an off-by-one error. The comment above it is correct
but incomplete. This code will flush
all entries. It will also flush one more.
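For comparison, a corrected loop, assuming entry holds exactly n_entries valid elements indexed from zero, stops one iteration earlier:

/* Flush all log entries: valid indices are 0 through n_entries - 1 */
for (i = 0; i < n_entries; i++) {
    flush_entry(&entry[i]);
}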
When debugging, treat comments as
merely informative, not normative.
Conclusion
Debugging is one of the most difficult
aspects of applied computer science.
Individuals’ views and motivations
in the area of problem solving are becoming better understood through
far-reaching research conducted by
Carol Dweck and others. This research
provides a means to promote continued growth in students, colleagues,
and yourself.
Debugging is a science, not an art.
To that end, it should be embraced as
such in institutions of higher learning.
It is time for these institutions to introduce entire courses devoted to debugging. This need was suggested as far back as 1989 [10]. In 2004, Ryan Chmiel and Michael C. Loui observed that “[t]he computing curriculum proposed by the Association for Computing Machinery and the IEEE Computer Society makes little reference to the importance of debugging” [2]. This appears to still be true.
Learning solely through experience (suggested by Oman et al. as the primary way debugging skills are learned [10]) is frustrating and expensive. At a time when the software engineering industry is understaffed, it appears that individuals with certain self-theories resulting from social and cultural influences are left behind. Understanding Dweck’s work and changing the way we approach education, mentorship, and individual study habits can have a profound long-term effect on the progress of the software development industry.
While research into tools to ease the task of debugging continues to be important, we must also embrace and continue research into how better to help students, colleagues, and peers succeed in computer science.
Related articles
on queue.acm.org
Undergraduate Software Engineering
Michael J. Lutz, J. Fernando Naveda, and
James R. Vallino
http://queue.acm.org/detail.cfm?id=2653382
Coding Smart: People vs. Tools
Donn M. Seeley
http://queue.acm.org/detail.cfm?id=945135
Interviewing Techniques
Kode Vicious
http://queue.acm.org/detail.cfm?id=1998475
References
1. Britton, T., Jeng, L., Carver, G., Cheak, P. and
Katzenellenbogen, T. Reversible debugging software.
Cambridge Judge Business School, 2013;
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.444.9094&rep=rep1&type=pdf.
2. Chmiel, R. and Loui, M.C. Debugging: from novice to
expert. SIGCSE Bulletin 36, 1 (2004), 17–21.
3. Cutts, Q., Cutts, E., Draper, S., O’Donnell, P. and
Saffrey, P. Manipulating mindset to positively
influence introductory programming performance. In
Proceedings of the 41st ACM Technical Symposium on
Computer Science Education, 2010, 431–435.
4. Duckworth, A. L., Peterson, C., Matthews, M. D., Kelly,
D.R. Grit: Perseverance and passion for long-term
goals. J. Personality and Social Psychology 92, 6
(2007), 1087–1101.
5. Dweck, C. Self-theories: Their Role in Motivation,
Personality, and Development. Psychology Press, 1999.
6. Kernighan, B. W. and Plauger, P.J. The Elements of
Programming Style. McGraw-Hill, 1974.
7. Ko, A.J. and Myers, B.A. A framework and
methodology for studying the causes of software
errors in programming systems. J. Visual Languages
and Computing 16, 1-2 (2005), 41–84.
8. McCauley, R., Fitzgerald, S., Lewandowski, G., Murphy,
L., Simon, B., Thomas, L. and Zander, C. Debugging: a
review of the literature from an educational perspective.
Computer Science Education 18, 2 (2008), 67–92.
9. Murphy, L. and Thomas, L. Dangers of a fixed
mindset: implications of self-theories research for
computer science education. SIGCSE Bulletin 40, 3
(2008), 271–275.
10. Oman, P. W., Cook, C.R. and Nanja, M. Effects of
programming experience in debugging semantic
errors. J. Systems and Software 9, 3 (1989), 197–207.
11. RTI. The economic impacts of inadequate
infrastructure for software testing, 2002; http://www.
nist.gov/director/planning/upload/report02-3.pdf.
12. Scott, M. and Ghinea, G. On the domain-specificity of
mindsets: the relationship between aptitude beliefs
and programming practice. IEEE Transactions on
Education 57, 3 (2014), 169–174.
13. Winslow, L. Programming pedagogy—A psychological
overview. SIGCSE Bulletin 28, 3 (1996), 17–22.
14. Yorke, M. and Knight, P. Self-theories: some implications
for teaching and learning in higher education. Studies in
Higher Education 29, 1 (2004), 25–37.
Devon H. O’Dell is a tech lead at Fastly, where
his primary focus includes mentorship of team members
and the scalability, functionality, and stability of Fastly’s
core caching infrastructure. Previously, he was lead
software architect at Message Systems and contributed
heavily across the Momentum high-performance
messaging platform.
Copyright held by author.
Publication rights licensed to ACM. $15.00.