Infrastructure for Continuous Assessment of Retained Relevant Knowledge
the course requires prior to beginning the class (prerequisite
knowledge) and to the knowledge topics that would then be
developed within that course. The mapping is then extended to ABET criteria by linking each knowledge topic to the ABET student learning outcome(s) [1, 4] it covers. This linkage
between courses, knowledge topics, and student learning outcomes provides a way of looking at the overall development of
student knowledge as each student progresses throughout the
program. It also ensures that no gaps exist between what the
instructor expects the students to know and what the students
have already been taught.
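The mapping and gap check described above could be sketched as follows. This is an illustrative sketch only: the course names, topics, and outcome numbers are invented for the example and are not the department's actual data.

```python
# Hypothetical course-to-topic-to-outcome mapping; all names and
# outcome numbers below are illustrative, not real curriculum data.
courses = {
    "CS 101": {"requires": set(), "develops": {"loops", "functions"}},
    "CS 201": {"requires": {"loops", "functions"},
               "develops": {"recursion", "linked lists"}},
    "CS 301": {"requires": {"recursion", "sorting"},
               "develops": {"graph algorithms"}},
}
# Each knowledge topic maps to the ABET student outcome(s) it covers.
topic_to_outcomes = {"loops": [1], "functions": [1], "recursion": [1, 6],
                     "linked lists": [2], "sorting": [2],
                     "graph algorithms": [2, 6]}

def prerequisite_gaps(ordered_courses, catalog):
    """Return, per course, any assumed topic no earlier course develops."""
    taught = set()
    gaps = {}
    for name in ordered_courses:
        missing = catalog[name]["requires"] - taught
        if missing:
            gaps[name] = missing
        taught |= catalog[name]["develops"]
    return gaps

print(prerequisite_gaps(["CS 101", "CS 201", "CS 301"], courses))
# → {'CS 301': {'sorting'}}: "sorting" is assumed but never taught earlier
```

Running the check over the full required-course sequence surfaces exactly the instructor-expectation gaps the mapping is meant to rule out.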
CREATING DIRECT ASSESSMENT INSTRUMENTS
Using the mapping of knowledge topics to courses, a prerequisite
quiz is developed to test incoming students on the knowledge
topics (and, by mapping, the associated ABET student learning
outcomes) that instructors assumed the students had learned
in prior courses. To anchor the measurements to national standards, an effort was made to use questions developed externally to the university (such as Computer Science subject GRE-style questions) whenever possible. Instructors teaching subsequent courses (not the courses in which the topics are introduced) review and add questions as needed. Finally, the course coordinator and departmental curriculum committee review the questions to ensure that they are within the scope of the course. The
distancing of the question creation from the regular course instructors is intentional: it is vital that the assessment of mastery be tied to national standards rather than instructor-by-instructor ones.
The quiz is given to all students at the beginning of the course through an online assessment system [2]. Staff
members handle administrative details such as ensuring that the
quizzes are posted for each core course. Whenever possible, the
quiz is administered in an unused lab period at the beginning
of the term. We have found that this yields the highest participation. If a lab period is not available, the quiz may be given during class, but it is more often given as a take-home assignment.
Most instructors choose not to have the quiz scores affect the students’ grades. This allows more freedom for unsupervised administration and gives students a low-stakes measure of their preparedness for the course. Students are incentivized to take the prerequisite quiz both by its results (indicating areas where they might need review or help from the course instructor) and sometimes by additional instructor-based incentives (unlocking of online course materials, etc.).
The quiz results help assess the retained knowledge of the
students as they progress through the program without the bias
of students’ opinions of their own knowledge (a common concern with indirect assessment measures). Currently, we collect
data from nine courses that form the core of our computer science and computer engineering curricula.
COLLECTING THE RESULTS
Each term, the results from all the course quizzes are collected
and stored within a database to be used to assess the overall
program effectiveness. The assessment infrastructure is designed to satisfy three criteria:
1. The assessment provides continuous periodic direct
measurements of retained relevant knowledge.
2. The assessment outcome is immediately valuable to the
assessment participants (students and faculty) as well as
to the continuous improvement of the program.
3. The assessment is not unduly burdensome.
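The per-term collection step above could be sketched as a small database workflow. The schema, table names, and rows below are assumptions made for illustration, not the authors' actual system.

```python
# Minimal sketch of storing and aggregating quiz results; the schema
# and sample rows are hypothetical, not the real assessment database.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in a real deployment
conn.execute("""
    CREATE TABLE quiz_results (
        term       TEXT,     -- e.g. '2024F'
        course     TEXT,     -- core course the quiz was given in
        student_id TEXT,
        topic      TEXT,     -- knowledge topic the question assesses
        outcome    INTEGER,  -- ABET student outcome mapped to the topic
        correct    INTEGER   -- 1 if answered correctly, else 0
    )""")

# Illustrative results from one term's prerequisite quiz.
rows = [
    ("2024F", "CS 301", "s1", "recursion", 1, 1),
    ("2024F", "CS 301", "s1", "sorting",   2, 0),
    ("2024F", "CS 301", "s2", "sorting",   2, 1),
]
conn.executemany("INSERT INTO quiz_results VALUES (?, ?, ?, ?, ?, ?)", rows)

# Per-topic retention rate, the figure used for program-level assessment.
for topic, rate in conn.execute(
        "SELECT topic, AVG(correct) FROM quiz_results GROUP BY topic"):
    print(f"{topic}: {rate:.0%}")
```

Because each row carries the topic-to-outcome mapping, the same table can be grouped by ABET outcome instead of by topic to report against accreditation criteria.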
The goal of assessment is to provide data to measure (or to illustrate a need for) improvement. The definition of the assessment standards then sets a target toward which a program continuously strives. Although program objectives differ significantly
among institutions, certain learning outcomes are expected of
graduates of computer science and engineering programs. We
believe that the standard towards which programs should strive
in Engineering is best communicated not only by the accreditation agencies but also by the appropriate discipline-specific
international professional society. These societies maintain and
regularly update the themes, knowledge areas, and professional
practices expected of those entering their discipline.
In Computer Science, the Joint Task Force on Computing
Curricula between the Association for Computing Machinery
(ACM) and the IEEE Computer Society provides regularly updated curriculum standards, most recently in the volumes Computer Science Curricula 2013 (CS2013) [6] and Computer Engineering Curricula 2016 (CEG2016) [5]. The CS2013 and CEG2016 Bodies of Knowledge organize the expectations of computing graduates into Knowledge Areas (KAs), which are created, revised, and removed as the discipline changes over time. Each KA is further specified as a set of Knowledge Units, each of which specifies a set of Knowledge Topics expected at the time of graduation. CS2013 and CEG2016 can
serve as “gold standards” for contemporary computing education in computer science and computer engineering programs.
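The KA/KU/Topic hierarchy described above can be modeled directly. The excerpt below paraphrases a small slice of the CS2013 Body of Knowledge for illustration; consult CS2013 itself for the exact knowledge units and topic wording.

```python
# Sketch of the CS2013 hierarchy: Knowledge Areas contain Knowledge
# Units, which list Knowledge Topics. Entries paraphrase CS2013 and
# are illustrative, not exhaustive.
body_of_knowledge = {
    "AL (Algorithms and Complexity)": {
        "Basic Analysis": [
            "Big O notation",
            "Best, average, and worst case behaviors",
        ],
        "Fundamental Data Structures and Algorithms": [
            "Binary search trees",
            "Graph traversal (BFS, DFS)",
        ],
    },
}

# Flatten to (KA, KU, topic) triples -- the granularity at which
# prerequisite-quiz questions can be tagged.
triples = [(ka, ku, topic)
           for ka, units in body_of_knowledge.items()
           for ku, topics in units.items()
           for topic in topics]
print(len(triples))  # → 4
```

Tagging each quiz question with such a triple is what lets local results be compared against the discipline-wide recommendations rather than against course-local expectations alone.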
While acknowledging that every program has differing educational objectives, the use of professional society standards provides metrics that gauge the success of the program against a national model. Such metrics suggest an infrastructure for direct assessment that allows comparison against discipline-wide expectations and reflection on the need for, causes of, and appropriateness of any major deviations from the widespread consensus proposed by the discipline’s professional societies.
In an effort to compare our students’ experiences to those
across the nation, our first step is to create a mapping between our courses and the recommended knowledge topics
from the professional societies. Working with core program
faculty, we map every mandatory course in Computer Science and Computer Engineering to the knowledge topics that