SIGCSE: Now and Moving Forward
The International Computing Education Research (ICER) Conference
Sally Fincher, University of Kent
Although the subject of much speculative discussion in the early 2000s, there was a very real moment when ICER was
brought to life. It was in a bar in the conference venue of the
SIGCSE Symposium. I was talking with Richard Anderson,
some of his PhD students from the University of Washington,
and participants of the “Bootstrapping Research in Computer
Science Education” workshop series. We started to talk about
why there wasn’t a dedicated Computing Education Research
conference. I said, “If I’m re-elected to the SIGCSE Board, I’ll
work to establish one.” Richard said, “If you do, I’ll support it.”
At its fall 2004 meeting, the SIGCSE Board gave the go-ahead
to start a new research-focused conference, and the first ICER was
hosted at the University of Washington, Seattle on October 1–2,
2005 (scheduled to avoid football games). From the outset, ICER
was built on the idea that research is formed and sustained by discourse—a research community needs places to publish (and read)
and to meet (and talk). The nascent computing education research
community needed ICER to be a high-quality venue, but also a
high value one, and this informed all our design decisions.
At the start, we could see no way to separate the work this conference needed into traditional roles. There were too few people interested in computing education research to go around. So
ICER was formed and led by the Triumvirate: three researchers
who would all serve for three years, each hosting the conference
at their own institution once during that time. The first Triumvirate was composed of Richard Anderson, Mark Guzdial, and me.
A research community is not located in a single geographical area.
Unlike the Symposium, ICER would need to move between different
areas of the world, while acknowledging the larger population of
American SIGCSE members. We chose the
location pattern of North America, Europe, North America, Aus-
tralasia; the pattern proved satisfactory and persists to this day.
Around this time (2004) there was considerable disquiet and debate about reviewing for the Symposium, where there was no control over who
could sign up as a reviewer. In establishing ICER we took the view
that reviews should be conducted by a community of peers, so
an early decision was to keep reviews within the named program
committee. Thus, as an author, you knew in a broad sense who
was reviewing your paper and so you could have confidence in
their qualification to do so (even if you might not like their opinion!). In the early days, we also took the view that the opinions of
the reviewers were there to inform the judgement of the Chairs.
In founding ICER we had to establish norms and standards
for submission and reviewing. Initially, Richard, Mark, and I
read every paper and all the reviews: if we, as Chairs, liked the
paper, it was accepted, even if that meant contradicting some
(or all) of the reviewers’ opinions and overruling their recommendations (and vice versa). So, from the outset, we included our own
comments when returning reviews to authors. This practice
was largely successful, and generally appreciated, as one 2005
author responded: “Thanks for the detailed comments, both
from the PC and from the reviewers. I’m blown away by the
thoroughness and thoughtfulness of the reviews. I wish all conferences maintained such high reviewing standards!”
This “closed reviewer pool” model has, in some years, led to
extraordinary loads for reviewers (and rebellion), and different methods have been tried to balance that. At times a
more mechanistic approach has been adopted (“adding up the
reviewer scores”); more recently, this has been addressed by the
introduction of tiers of review, with a pool of “meta-reviewers”
each assigned to a group of papers.
If a conference is to help support and nourish a community, then
people must exchange views. So, from the start, ICER was conceived as a place to have conversations. ICER was designed as, and