I have been reworking a device driver
for a high-end, high-performance
networking card and I have a resource
allocation problem. The devices I am
working with have several network
ports, but these are not always in use;
in fact, many of our customers use
only one of the four available ports. It
would greatly simplify the logic in my
driver if I could allocate the resources for all the ports—no matter how
many there are—when the device
driver is first loaded into the system,
instead of dealing with allocation
whenever an administrator brings up a port.
I should point out that this device
has a good deal of complexity and the
resource allocation is not as simple as
a quick malloc of memory and pointer
jiggling—a lot of moving parts are inside this thing.
We are not talking about a huge
amount of memory by modern standards—perhaps a megabyte per
port—but it still bothers me to waste
memory, or really any resource, if it is
not going to get used. I am old enough
to remember eight-bit computers
with 64 kilobytes of RAM, and programming those gave me a strong internal incentive never to waste a byte,
let alone a megabyte. When is it OK
to allocate memory that might never
be used, even if this might reduce the
complexity of my code?
Fearful of Footprints
The answer to your question is easy: it is sometimes OK to allocate memory that might never be used, and it is sometimes not OK to allocate that very same memory. Ah, are those the screams of a programmer who has just been denied a black-and-white, true-or-false answer? Delightful!
Software engineering, much to your
and my chagrin, is the study of trade-offs. Time vs. complexity, expediency
vs. quality—these are the choices we
deal with every day. It is important for
engineers to revisit their assumptions
periodically, perhaps every year or two,
as the systems we work on change under us quite quickly.
Programmers who are paying attention to the systems they use—and
I know that each and every one of my
readers is paying attention—have seen
these systems change dramatically
over the past five years, just as they did in the five years before that, and so on,
back to the first computers. While
processor frequency scaling may have
paused for the moment (and we will
see how long that moment lasts), the
size of memory has continued to grow.
It is not uncommon to see single servers with 64 or 128 gigabytes of RAM, and this explosion of available memory has led to some very poor programming practices.
Blindly wasting resources, such as memory, really is foolish, but blind waste is not an engineering trade-off; it is the mark of a programmer who is too far from their machine, trying to “just make it work.” That is not programming, that is just typing.
Software engineers and programmers worth their expensive chairs and high salaries know they do not want to waste resources, so they try to figure out what the best- and worst-case scenarios are and how they will affect the other possible users of the system. Users in most cases are now just other programs, rather than other people, but we all know what happens to a system when it starts to swap things out of memory onto secondary storage. That’s right, your DevOps people call you, screaming, at 3 a.m. Screaming people are never that much fun, except at a concert.
You mentioned this software is for
a “high-performance” device, and if by
that you mean it goes in a typical 64-bit
server-class machine, then no one is really going to notice a megabyte, or four,
or even eight. A high-end server-class
machine is unlikely to have less than
four gigabytes of RAM. Even if you allocate four megabytes at system start-up time, that is four megabytes out of 4,096, one-tenth of 1% of the
available RAM. People writing in Java
will suck down far more than that just
starting their threads. Are you really going to worry about less than one-tenth
of a percent of memory?
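
If you do take the up-front route, the code is as simple as you hope. What follows is a minimal sketch, not your driver's real API: nport_softc, nport_attach, and the rest are invented names, and the plain malloc stands in for whatever DMA-capable allocator your kernel actually requires. Grab everything when the driver attaches, and bringing a port up becomes pure bookkeeping.

#include <stdlib.h>
#include <string.h>

#define NPORT_MAX_PORTS 4             /* invented: ports per card */
#define NPORT_BUF_BYTES (1024 * 1024) /* invented: ~1MB of per-port resources */

struct nport_port {
    void *buffers;  /* rings, queues, and the other moving parts live here */
    int   in_use;
};

struct nport_softc {
    struct nport_port ports[NPORT_MAX_PORTS];
};

/* Called once, when the driver is loaded: allocate every port's
 * resources, whether or not the port is ever brought up. */
static int
nport_attach(struct nport_softc *sc)
{
    for (int i = 0; i < NPORT_MAX_PORTS; i++) {
        sc->ports[i].buffers = malloc(NPORT_BUF_BYTES);
        if (sc->ports[i].buffers == NULL) {
            while (i-- > 0)          /* unwind everything on failure */
                free(sc->ports[i].buffers);
            return (-1);
        }
        memset(sc->ports[i].buffers, 0, NPORT_BUF_BYTES);
        sc->ports[i].in_use = 0;
    }
    return (0);
}

/* Bringing a port up is now trivial: the memory already exists. */
static int
nport_port_up(struct nport_softc *sc, int port)
{
    if (port < 0 || port >= NPORT_MAX_PORTS)
        return (-1);
    sc->ports[port].in_use = 1;
    return (0);
}

Notice that all the failure handling happens once, at load time, when there is a human nearby to read the error message.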
If you had told me that this driver was for some limited-memory embedded device, I would give you very different advice.
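On that kind of device, I would want each port's resources allocated only when an administrator actually brings the port up. Here is a hedged sketch of that lazy variant, reusing the invented placeholder names from the sketch above:

/* Lazy variant for memory-starved systems: allocate a port's
 * resources only when it is brought up, free them when it goes down. */
static int
nport_port_up_lazy(struct nport_softc *sc, int port)
{
    if (port < 0 || port >= NPORT_MAX_PORTS)
        return (-1);
    if (sc->ports[port].buffers == NULL) {
        sc->ports[port].buffers = malloc(NPORT_BUF_BYTES);
        if (sc->ports[port].buffers == NULL)
            return (-1);    /* can now fail long after load time */
        memset(sc->ports[port].buffers, 0, NPORT_BUF_BYTES);
    }
    sc->ports[port].in_use = 1;
    return (0);
}

static void
nport_port_down_lazy(struct nport_softc *sc, int port)
{
    if (port < 0 || port >= NPORT_MAX_PORTS)
        return;
    sc->ports[port].in_use = 0;
    free(sc->ports[port].buffers);
    sc->ports[port].buffers = NULL;
}

The trade is plain: the lazy version consumes memory only for ports in use, but the allocation can now fail at run time, on a busy system, at 3 a.m., which is precisely the complexity you were hoping to engineer away.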