semester, we administered the survey
by email to faculty and in paper form
to students in three CS courses: first-semester introductory (CS1), second-semester introductory (CS2), and senior-level capstone design.
We obtained responses from 13 faculty (of a total of 25). From students, we received 71 surveys in CS1, 48 in CS2, and 41 in the senior capstone. The survey was voluntary,
though no more than one or two students in each class declined to participate. No surveys contained a response
to the catch item, but we did reject one
survey because the last three pages
had identical responses for each item.
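The following Python sketch illustrates this kind of screening on made-up data; the function, data layout, and cutoff point are assumptions for illustration, not our actual processing.

```python
# Illustrative screening sketch: flag a survey that gives one identical
# response to every item from some point onward (a sign of non-engagement).
# The data layout and the cutoff are assumptions, not our instrument.

def straight_lined(responses, start_item):
    """Return True if every response from start_item onward is identical."""
    tail = responses[start_item:]
    return len(tail) > 0 and len(set(tail)) == 1

# Example: a survey whose later items are all "agree" would be flagged.
survey = ["agree", "disagree", "neutral"] + ["agree"] * 20
print(straight_lined(survey, start_item=3))   # True -> set aside for review
```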
We tallied responses by grouping
“strongly disagree” and “disagree” as
negative responses, “strongly agree”
and “agree” as positive responses,
and all other responses, including
omitted responses, as neutral. We
examined the faculty responses to classify each item as either rejected or endorsed by the faculty. Using the criterion that at least 75% of faculty had to agree in order to reject or endorse an item, we excluded five items that did not show consensus among the faculty (see cluster 2 in the table).
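This tallying scheme and the 75% consensus screen can be sketched as follows; the Likert labels and the example responses are illustrative, not our data set.

```python
# Sketch of the coding scheme and the 75% faculty-consensus screen.
# The labels and the example responses are illustrative assumptions.

def code_response(label):
    """Collapse a Likert response into negative, positive, or neutral."""
    if label in ("strongly disagree", "disagree"):
        return "negative"
    if label in ("strongly agree", "agree"):
        return "positive"
    return "neutral"   # includes neutral, other labels, and omitted items

def faculty_consensus(responses, threshold=0.75):
    """Return 'endorsed', 'rejected', or None when no consensus is reached."""
    coded = [code_response(r) for r in responses]
    n = len(coded)
    if n and coded.count("positive") / n >= threshold:
        return "endorsed"
    if n and coded.count("negative") / n >= threshold:
        return "rejected"
    return None   # item excluded from further analysis

# Example: 9 of 13 faculty agreeing falls below 75%, so no consensus.
print(faculty_consensus(["agree"] * 9 + ["disagree"] * 4))   # None
```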
We placed the remaining 27 items in thematic categories using a combination of what Adams et al.1 called “predeterminism” and “raw statistical” grouping. We first sorted the items into groups reflecting our sense of the relationships among them, without reference to the data (predeterminism), and then used hierarchical cluster analysis, a statistical technique, to identify items participants commonly responded to in the same way (using the SPSS 16 package2).
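We ran the clustering in SPSS, but the idea can be sketched with SciPy’s hierarchical clustering; the small response matrix, the distance metric, and the linkage method below are illustrative assumptions.

```python
# Sketch of hierarchical cluster analysis over coded survey responses.
# The response matrix, metric, and linkage choice are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are respondents, columns are items; responses coded -1/0/+1
# (negative/neutral/positive). These values are made up.
responses = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1,  0,  1,  1],
    [-1, -1,  1,  1],
    [ 1,  0, -1,  0],
])

# Cluster the items (columns), so items commonly answered the same way
# end up in the same group.
item_profiles = responses.T
tree = linkage(item_profiles, method="average", metric="euclidean")
print(fcluster(tree, t=2, criterion="maxclust"))   # e.g., [1 1 2 2]
```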
Before we performed cluster analysis, we transformed responses for
items in the same thematic category so
answers reflecting a related underlying
attitude would be coded the same. For
example, we transformed the responses to item 64 (“If you can do something
you don’t need to understand it”) so
a negative response would match a
positive response to item 12 (“I am not
satisfied until I understand why something works the way it does”).
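A minimal sketch of this reverse-coding step, using the same -1/0/+1 coding as in the clustering sketch above (the numeric coding and the set of reversed items are assumptions for illustration):

```python
# Sketch of reverse-coding a negatively worded item before clustering,
# using -1/0/+1 for negative/neutral/positive responses.
# Treating item 64 as the only reversed item is an illustrative assumption.

REVERSED_ITEMS = {64}

def transform(item_number, coded_response):
    """Flip the sign for reversed items so related attitudes are coded alike."""
    if item_number in REVERSED_ITEMS:
        return -coded_response   # negative <-> positive, neutral unchanged
    return coded_response

# Disagreeing with item 64 (-1) now matches agreeing with item 12 (+1).
print(transform(64, -1), transform(12, 1))   # 1 1
```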
We used the results of the cluster
analysis to modify the groupings to
bring the resulting categories in line
with the data, where appropriate. That
is, where the data showed the participants commonly answered two items
the same way, we grouped these items
together, even if they were not grouped
together in our original classification.
In other cases, where the data showed
that two items we thought were related
were actually commonly answered differently, we adjusted the grouping to
reflect that fact.