(minimum price increment) from 1/16th of a dollar to $0.01 per share. What this meant was that "overnight the minimum spread a market-maker (someone who electronically offered to both buy and sell a security) stood to pocket between a bid and offer was compressed from 6.25 cents…down to a penny."⁵

This led to an explosion in revenue for U.S. equity-based HFT firms, as they were the only shops capable of operating at such small incremental margins through the execution of massive volume. Like the plot of Superman III, HFT shops could take over market-making in U.S. equities by collecting pennies (or fractions of a penny) millions of times a day. I was not trading stocks, however; I was trading futures and bonds. Tucked inside a large Wall Street partnership, I was tackling markets that were electronic but outside the purview of the average algorithmic shop. This is important, as it meant we could start with a tractable goal: build an automated market-making system that executes trades in under 10 milliseconds on a 3.2GHz Xeon (130nm). By 2004, this was halved to five milliseconds, and we were armed with a 3.6GHz Nocona. By 2005 we were approaching the one-millisecond barrier for latency arbitrage and were well into the overclocking world. I remember bricking a brand-new HP server in an attempt to break the 4.1GHz barrier under air.

By 2005, most shops were also modifying kernels and/or running real-time kernels. I left HFT in late 2005 and returned in 2009, only to discover the world was approaching absurdity: by 2009 we were required to operate well below the one-millisecond barrier, and were looking at tick-to-trade requirements of 250 microseconds. Tick-to-trade is the time it takes to:
1. Receive a packet at the network interface;
2. Process the packet and run through the business logic of trading;
3. Send a trade packet back out on the network interface.
To do this, we used real-time kernels, and in my shop we had begun implementing functionality on the switches themselves (the Arista switch was Linux based, and we had root access). We must not have been alone in implementing custom code on the switch, because shortly after, Arista made a 24-port switch with a built-in field-programmable gate array (FPGA).¹ FPGAs were becoming more common in trading, especially in dealing with the increasing onslaught of market-data processing.

As with all great technology, using it became easier over time, allowing more and more complicated systems to be built. By 2010, the barriers to entry into HFT began to fall as many of the more esoteric technologies developed over the previous few years became commercially available. Strategy development, or the big-data problem of analyzing market data, was a great example. Hadoop was not common in many HFT shops, but the influx of talent in distributed data mining meant a number of products were becoming more available. Software companies (often started by former HFT traders) were now offering amazing solutions for messaging, market-data capture, and networking. Perhaps as a result of the inevitable lowering of the barriers to entry, HFT was measurably more difficult by 2010. Most of our models at that time were running at half-lives of three to six months.

I remember coming home late one night, and my mother, a math teacher, asked why I was so depressed and exhausted. I said, "Imagine every day you have to figure out a small part of the world. You develop fantastic machines, which can measure everything, and you deploy them to track an object
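One way to read "half-life" here is as simple exponential decay of a model's edge; a minimal sketch of that reading, with a function and numbers of my own choosing rather than anything from the article:

```python
def remaining_edge(initial_edge, months, half_life_months):
    # Exponential decay: the model's edge halves once per half-life.
    return initial_edge * 0.5 ** (months / half_life_months)

# A model with a three-month half-life retains only 1/16th
# (about 6%) of its original edge after a year.
print(remaining_edge(1.0, 12, 3.0))  # 0.0625
```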
[Figure 1. Using NAT to conform to the exchange transit network: a NAT device maps internal per-exchange addresses (Exchange 1: 10.0.0.1, Exchange 2: 10.0.0.2, Exchange 3: 10.0.0.3) to the exchanges' transit addresses (192.168.1.1, 192.168.2.1, 192.168.3.1) over 10GbE (MMF or SMF) links.]
[Figure 2. Monitoring a packet burst.]
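The translation in Figure 1 can be sketched as a static mapping. The addresses follow the figure's labels, the direction of rewrite is my assumption, and a real NAT device would also fix up checksums and track connection state:

```python
# Hypothetical static NAT table based on Figure 1: each internal
# per-exchange address is rewritten to the transit address that
# exchange expects to see.
NAT_TABLE = {
    "10.0.0.1": "192.168.1.1",  # Exchange 1
    "10.0.0.2": "192.168.2.1",  # Exchange 2
    "10.0.0.3": "192.168.3.1",  # Exchange 3
}

def translate(packet):
    # Rewrite the source address on the way out; addresses with no
    # table entry pass through unchanged.
    src = packet["src"]
    return {**packet, "src": NAT_TABLE.get(src, src)}

print(translate({"src": "10.0.0.2", "dst": "203.0.113.9"}))
```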