ABSTRACTION IS CONSIDERED to be a key skill underlying most activities in computer science (CS) and software engineering (SE).3,4 As a central concept, abstraction is taught and utilized in various guises in every CS and SE course: in requirements specification, problem solving, and modeling through to programming and debugging. Given its importance, can one assess an individual's abstraction ability? Are abstraction skills assessable at all? If not, why? If so, how? We decided to conduct a survey to investigate this topic. As far as we know, this is one of the first attempts to address this challenge.
In 2007, Kramer proposed developing a test that could assess abstraction skills in the context of CS and SE, one that would be more multifaceted than those used in psychometric tests. In this spirit, we decided to consult experts in CS and SE research and teaching about the suitability of various question patterns (or templates) for assessing abstraction skills. We deliberately used patterns rather than specific questions so as not to limit the experts' line of thought and, at the same time, to provide templates that each instructor could adjust and populate according to his or her needs.
Our data analysis reveals that expert instructors tend to agree on the suitability of a pattern for assessing abstraction ability when it asks students to construct an abstraction rather than to apply abstraction principles. We explain this finding in terms of Papert and Harel's constructionism learning theory.
We argue that directly asking experts the question "Are abstraction skills assessable?" would be unlikely to elicit a clear indication of how to go about constructing such an assessment tool. As a concrete proposal, we therefore developed a set of 10 question patterns and asked the experts to rate each pattern on a 1–10 scale according to its suitability for assessing abstraction skills. In addition, for each pattern we presented three open questions:
˲ What specific abstraction skills does Pattern X measure?
˲ Can you suggest an example that best fits Pattern X?
˲ Any additional comments on Pattern X?
In this way, we gained both a quantitative and a qualitative perspective of the

What makes a good question?