to not do any scaling whatsoever. Although desirable for research experiments, we chose not to do this given our short deadline; we were not very familiar with modifying the Google Glass OS and didn't want any more surprises. The second method, reliable enough for our purposes, is to actively cool the Google Glass with an external device.
We chose the technologically advanced ice pack from a freezer in our office. In the opening image, you can see our innovative experimental setup featuring a Google Glass ($1,500 Explorer Edition) and a $5.99 ice pack. Trained professionals were on hand during all experiments. You may or may not want to try this at home; I haven't asked Google whether this voids any warranties.
Just like a sick person applying a damp cloth or ice
to their feverish forehead, we used an ice pack wrapped
around our Google Glass to keep its temperature down.
Once we had done this, the Google Glass CPU frequency stayed stable throughout our latency measurements. We still kept an eye on it, but with the ice pack in place the latency jitter disappeared!
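If you want to keep a similar eye on frequency and temperature yourself, a minimal sketch along these lines works, assuming adb access to the Glass and the usual Android sysfs paths for CPU frequency and thermal readings (the exact thermal zone path varies by device):

```python
import subprocess
import time

# Hypothetical sysfs paths; the exact thermal zone differs between devices.
FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"
TEMP = "/sys/class/thermal/thermal_zone0/temp"

def adb_cat(path):
    # Read a sysfs file on the device over adb.
    out = subprocess.run(["adb", "shell", "cat", path],
                         capture_output=True, text=True)
    return out.stdout.strip()

while True:
    # Frequency is reported in kHz; temperature is usually in millidegrees C.
    print(f"freq={adb_cat(FREQ)} kHz  temp={adb_cat(TEMP)}")
    time.sleep(5)
```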
What about your desktop? Could the CPU frequency on a desktop affect network latency? Yes! As Figure 2 shows, CPU frequency affects network latency on a desktop too. At the highest frequency my desktop's CPU supports, there was minimal jitter, and the maximum observed latency to a server one hop away was under 4 milliseconds. Contrast this with the other frequencies my CPU can run at: each shows more jitter, as well as a higher maximum observed latency.
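If you'd like to collect a trace like those in Figure 2 on a Linux desktop, here is a rough sketch, assuming the standard sysfs cpufreq interface and the iputils ping utility; the target address is a placeholder for your own one-hop-away server, and you would pin the frequency with your distribution's cpufreq tooling between runs:

```python
import re
import subprocess

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"
TARGET = "192.168.1.1"  # placeholder: your one-hop-away server

def current_freq_khz():
    # The instantaneous frequency the governor has chosen, in kHz.
    with open(CPUFREQ) as f:
        return int(f.read().strip())

def ping_once(host):
    # Send a single ping and parse the "time=X ms" field from its output.
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for _ in range(100):  # 10,000 packets in the real traces
        rtt = ping_once(TARGET)
        if rtt is not None:
            print(f"{current_freq_khz()} kHz\t{rtt:.3f} ms")
```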
Investigating the outliers in Figure 3 shows a clear separation between the highest CPU frequency my desktop supports and the other levels it can run at. All of the lower frequencies experienced greater latencies overall.
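One simple way to examine the tail the way Figure 3 does is to compute percentiles over each trace. A quick sketch, assuming each trace has been collected as a plain list of round-trip times in milliseconds:

```python
import statistics

def tail_stats(latencies_ms):
    """Summarize the tail of one latency trace (a list of milliseconds)."""
    xs = sorted(latencies_ms)

    def pct(p):
        return xs[min(len(xs) - 1, int(p * len(xs)))]

    return {
        "p50": pct(0.50),
        "p99": pct(0.99),
        "p99.9": pct(0.999),
        "max": xs[-1],
        "stdev": statistics.stdev(xs),  # rough stand-in for jitter
    }

# Hypothetical input: one trace per frequency setting.
# traces = {"2.66GHz": [...], "1.60GHz": [...], ...}
# for freq, trace in sorted(traces.items()):
#     print(freq, tail_stats(trace))
```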
A lower CPU frequency implies higher latency for any task: the system simply cannot do the same amount of work as one running at a higher frequency. You can imagine that as you lower the CPU frequency, more and more work queues up waiting for CPU time, because the processor drains it more slowly. This queue is essentially unbounded; you could fill all of RAM, or even spill over into swap space on disk, with work waiting for your CPU. And the larger the queue, the higher the latency. Intuitively this makes sense, but I haven't seen many people explore it quantitatively!
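To make that intuition concrete, here is a toy simulation, not our experiment, just an illustration with made-up rates: when work arrives faster than a slowed-down CPU can drain it, the backlog, and with it the waiting time, grows without bound.

```python
def simulate(arrivals_per_tick, service_per_tick, ticks=1000):
    """Toy queue: fixed arrivals, fixed drain rate, unbounded backlog."""
    backlog = 0.0
    for _ in range(ticks):
        backlog += arrivals_per_tick                     # new work shows up
        backlog = max(0.0, backlog - service_per_tick)   # CPU drains what it can
    # With a fixed cost per item, waiting time scales with backlog / drain rate.
    return backlog, backlog / service_per_tick

# A fast "CPU" keeps up; a slowed-down one falls further behind every tick.
print(simulate(arrivals_per_tick=10, service_per_tick=12))  # (0.0, 0.0)
print(simulate(arrivals_per_tick=10, service_per_tick=8))   # backlog and delay grow
```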
Biography
Wolfgang Richter is a fifth-year Ph.D. student in Carnegie Mellon University's Computer Science Department. His research focus is distributed systems, and he works under Mahadev Satyanarayanan. His current research thread is developing technologies for introspecting clouds. tl;dr: Cloud Computing Researcher.
Figure 2. Raw latency traces (milliseconds on the y-axis, 10,000 packets) for a desktop with CPU scaling, measured against a server one hop away.
[Figure 2 shows one trace per available CPU frequency: 1.60GHz, 1.86GHz, 2.13GHz, 2.39GHz, and 2.66GHz.]
Figure 3. Six outliers showing the tail of the latency traces.
[Figure 3 plots latency in milliseconds for each CPU frequency, 1.60GHz through 2.66GHz, with outliers marked.]