Discussion
Our productivity analysis found that although the productivity of the CS areas
ranges from 2.5 (MIS) to 7.8 (DC) papers
per year, the only significant differences are between the extremes of the
spectrum. The total productivity of researchers in ARCH, COMM, DC, and
IPCV is significantly higher than that
of researchers in MIS and OR. The total productivity of the other areas does
not differ significantly. Thus CS departments and evaluation bodies should be
mindful when comparing researchers
in MIS and OR to researchers in ARCH,
COMM, DC, and IPCV.
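The pairwise comparisons behind such statements can be sketched with a simple permutation test for a difference in mean productivity. The samples below are hypothetical papers-per-year values for two areas, not the study's data:

```python
import random
import statistics

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Repeatedly reshuffles the pooled observations and counts how often
    a random split produces a mean difference at least as large as the
    observed one; the fraction is an estimated p-value.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical papers-per-year samples for two areas (illustrative only).
dc = [6.9, 8.1, 7.5, 7.8, 8.4, 7.2, 7.9, 8.0]
mis = [2.1, 2.8, 2.4, 2.6, 2.3, 2.9, 2.2, 2.7]
p_value = permutation_test(dc, mis)  # well below 0.05 for these samples
```

Note that when many areas are compared pairwise, a multiple-comparison correction (e.g., Bonferroni) is needed before declaring a difference significant.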
Some evaluation criteria, especially
those that apply to disciplines other
than CS, put more emphasis on journal publications. CS departments that
emphasize journal publications must
be mindful that BIO in one group, and
all marked areas without a “d” in the
second column of Table 3 in the other,
have significantly different journal productivity. However, BIO journal publication practices are not significantly
different from those of COMM, MIS,
ML, and OR.
There are more pronounced differences regarding whether the areas
are conference- or journal-oriented in
their publication practices. BIO, MIS,
and OR are clearly journal-oriented
and significantly different from the
other areas. ML and TH are also significantly different from the most conference-oriented areas.
Regarding citations, there are significant differences among three groups:
MIS (by itself); BIO, DB, HCI, and
GRAPH; and finally, ARCH and
MM. There is also an interesting negative correlation between productivity
and citation rates beyond the influence
of one area’s emphasis on conference
or journal publications.
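One way to check that such a correlation persists after controlling for an area's conference/journal orientation is a partial correlation: residualize both variables on the orientation score, then correlate the residuals. A minimal sketch with hypothetical per-area values (none are the study's numbers):

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def residuals(y, z):
    """Residuals of y after removing a linear effect of z (simple OLS)."""
    mz, my = statistics.mean(z), statistics.mean(y)
    beta = (sum((a - mz) * (b - my) for a, b in zip(z, y)) /
            sum((a - mz) ** 2 for a in z))
    return [b - my - beta * (a - mz) for a, b in zip(z, y)]

# Hypothetical per-area values (illustrative only): papers/year,
# citations/paper, and a journal-orientation score in [0, 1].
prod = [8.9, 7.3, 7.2, 5.8, 4.7, 2.1]
cites = [2.7, 4.1, 4.0, 5.2, 5.9, 8.1]
journ = [0.2, 0.3, 0.4, 0.5, 0.7, 0.9]

# Negative, i.e., the correlation remains after controlling for orientation.
partial_r = pearson(residuals(prod, journ), residuals(cites, journ))
```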
Consider, too, these other interesting findings:
˲ We included BIO and SEC as examples of new CS areas. BIO indeed
reflects very different publication and
citation patterns from most other CS
areas. SEC publication and citation
patterns do not differ from those of the majority;
˲ BIO, MIS, and OR are less-central
CS areas, in the sense that a larger proportion of researchers in them are not
in CS departments (though, to our surprise, the same holds for IPCV). In some sense this
non-centrality might indicate these areas are more interdisciplinary or multidisciplinary. In terms of publication
and citation practices they differ somewhat from the bulk of CS, as discussed
earlier, probably because CS researchers adapt their practices to those of
their colleagues in other disciplines; and
˲ As far as our sampling was able to
gauge student availability, MM and ARCH seem to have
the most students per CS researcher,
while MIS, DC, and TH have the fewest.
Our research quantifies information that researchers in the various CS
areas already know, such as the
emphasis some of them put on conference publications. Some CS researchers have intuition regarding the differences among the areas, derived from
their personal observations of colleagues and acquaintances in these
areas. However, as discussed earlier,
before we began this research, this
intuition should have been viewed
as unproved belief gathered from a
limited convenience sample. We
derive our conclusion from a random
sample of 30 researchers worldwide
and of 100 papers in each CS area. On
the other hand, our research should
be viewed as only a first step toward
understanding the differences among
CS areas. Moreover, our conclusions
are limited by some issues that
warrant further discussion:
The first is that our sampling of
researchers introduced some bias.
We discovered it is more likely that
a non-senior faculty researcher in a
university in a Western country would
have an up-to-date publications page
than the alternatives, including, say,
a researcher in an Eastern country,
students, industry-based researchers,
and senior faculty researchers. Given
that junior faculty are the researchers
most likely to be evaluated through
some of the metrics covered here,
this bias has a limited effect. However, faculty in non-Western universities should take care when using our
results, as they may not reflect their
professional experience.
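Sampling concerns of this kind ultimately come down to the standard error of the estimates. A minimal sketch with synthetic papers-per-year draws (purely illustrative) shows how the error shrinks with sample size:

```python
import random
import statistics

def standard_error(sample):
    """Standard error of the sample mean: s / sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

rng = random.Random(42)
# Synthetic papers-per-year draws (illustrative only; mean 5, s.d. 2).
sample_30 = [rng.normalvariate(5.0, 2.0) for _ in range(30)]
sample_120 = [rng.normalvariate(5.0, 2.0) for _ in range(120)]

# Quadrupling the sample size roughly halves the standard error.
se_small = standard_error(sample_30)
se_large = standard_error(sample_120)
```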
The second issue is sample size.
Sampling researchers is labor intensive, so the sample size is small and the
standard error associated with the mea-