WAN Links. Outside the data center the systems need WAN links. Traditionally, HFT shops ran two sets of links, as shown in Figure 3: a high-throughput path and a lower-throughput fast path. For the high-throughput path, private point-to-point fiber is used; 10GbE (10-gigabit Ethernet) is preferred. For the fast path, each location allows for options. In the New York metro area, both millimeter-wave and microwave solutions are available. These technologies are commonplace for HFT fast-path links, since the refractive index of air is lower than that of glass fiber, so signals propagate faster and the link has lower latency.
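The latency advantage of radio over fiber follows directly from the refractive indices. A minimal sketch of the arithmetic, assuming a 1,000-km link (the distance and index values are illustrative, not figures from the text):

```python
# Sketch: why microwave/millimeter-wave links beat fiber on latency.
# The 1,000-km distance is an illustrative assumption.
C = 299_792.458            # speed of light in vacuum, km/s
N_FIBER = 1.47             # typical refractive index of optical fiber
N_AIR = 1.0003             # refractive index of air at sea level

def one_way_latency_ms(distance_km: float, refractive_index: float) -> float:
    """Propagation delay: distance divided by (c / n), in milliseconds."""
    return distance_km / (C / refractive_index) * 1_000.0

fiber = one_way_latency_ms(1_000, N_FIBER)   # ≈ 4.9 ms
radio = one_way_latency_ms(1_000, N_AIR)     # ≈ 3.3 ms
print(f"fiber: {fiber:.2f} ms, radio: {radio:.2f} ms, saved: {fiber - radio:.2f} ms")
```

Over that distance, the straight-line radio path saves roughly 1.6 ms each way, an eternity at HFT timescales.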
Feed handler. The feed handler is often the first bit of code to be implemented by an HFT group. As shown in Figure 4, the feed handler subscribes to a market-data feed, parses the feed, and constructs a “clean” book. This is traditionally implemented on an FPGA and has now become a commodity for the industry (http://www.exegy.com). Most feed handlers for U.S. equities are able to parse multiple market-data feeds and build a consolidated book in less than 25 microseconds.
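The core of a feed handler can be sketched in a few lines: consume messages, maintain per-side price levels, and expose a clean view of the book. The message format below is invented for illustration (real feeds such as NASDAQ ITCH are binary), and production versions run on FPGAs, as noted above:

```python
# Minimal feed-handler sketch: parse simplified market-data messages and
# maintain a price-aggregated book. The CSV message format is invented
# for illustration; real feeds (e.g., NASDAQ ITCH) are binary.
from collections import defaultdict

class Book:
    def __init__(self):
        # price -> total displayed size, one map per side
        self.bids = defaultdict(int)
        self.asks = defaultdict(int)

    def apply(self, msg: str) -> None:
        # "A,B,9.99,100" = add 100 shares bid at 9.99; side "S" = ask
        # "X,B,9.99,100" = cancel 100 shares from that level
        action, side, price_s, size_s = msg.split(",")
        levels = self.bids if side == "B" else self.asks
        price = float(price_s)
        levels[price] += int(size_s) if action == "A" else -int(size_s)
        if levels[price] <= 0:
            del levels[price]          # drop empty levels to keep the book clean

    def bbo(self):
        best_bid = max(self.bids) if self.bids else None
        best_ask = min(self.asks) if self.asks else None
        return best_bid, best_ask

book = Book()
for m in ["A,B,9.99,100", "A,B,9.98,200", "A,S,10.01,150", "X,B,9.99,100"]:
    book.apply(m)
print(book.bbo())  # best bid falls back to 9.98 after the cancel: (9.98, 10.01)
```

A consolidated feed handler runs one such book per instrument per venue and merges them; the 25-microsecond budget cited above is for the whole parse-and-build path.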
Tickerplant. The tickerplant is the system component responsible for distributing the market-data feeds to the internal systems based on their subscription parameters (topic-based subscription), as shown in Figure 5. In these scenarios, the tickerplant is like a miniature version of Twitter, with multiple applications subscribing to different topics (market-data streams). In addition to managing topic-based subscriptions, advanced tickerplants often maintain a cache of recent updates for each instrument (to catch up subscribers), calculate basic statistics (for example, the moving five-minute volume-weighted average price), and provide more complicated aggregate topics (for example, the value of an index based on the sum of the underlying 500 securities).
Low-latency applications. In my experience, most high-frequency algorithms are fairly straightforward in concept, but their success is based largely on how quickly they can interact with the marketplace and how certain you can be of the information on the wire. What follows is a simple model (circa 2005), which required a faster and faster implementation to continue to generate returns.

To begin, let’s review some jargon.
tions leads to a change in large-scale exchange-traded funds (ETFs). The value of an ETF cannot drift far from the sum of its components (for example, the SPDR S&P 500), so the underlying stocks must change. Therefore, the state of employment in London will affect the price of Dollar Tree (DLTR), which does not have a single store outside North America; it’s a tangled web.
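The ETF relationship above is a no-arbitrage constraint: if the ETF trades rich or cheap relative to its basket, someone trades it back into line. A minimal sketch, with made-up tickers, per-share weights, and prices:

```python
# Sketch of the ETF/basket no-arbitrage relationship. Share counts and
# prices are made-up illustrative numbers, not real index weights.
basket = {
    "AAPL": (0.07, 180.0),   # (shares per ETF share, component price)
    "MSFT": (0.06, 330.0),
    "DLTR": (0.001, 120.0),
}

def basket_value(components):
    """Net asset value implied by the component prices."""
    return sum(shares * price for shares, price in components.values())

nav = basket_value(basket)
etf_price = nav + 0.25        # suppose the ETF is trading rich
if etf_price > nav:
    action = "sell ETF, buy components"
elif etf_price < nav:
    action = "buy ETF, sell components"
else:
    action = "no arbitrage"
print(f"NAV={nav:.2f}, ETF={etf_price:.2f}: {action}")
```

This is why a macro surprise that moves the basket price immediately moves every component, DLTR included.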
figure 5. tickerplant distributing market-data feeds. [The feed handler feeds the tickerplant, which routes topic streams (MSFT, AAPL, JPMC, FB) to the subscribing applications ALGO 1, ALGO 2, and ALGO 3.]
figure 6. the order book. [Price levels $8–$11 with queue sizes 12, 10, 10, 12 (bid queues bs1, bs0; ask queues as0, as1). Labels: best bid and best offer (ask), a.k.a. BBO; bid/ask spread; size available on each queue; front of the queue. Imbalance = bs0 / (bs0 + as0), called pUP; imbalance = as0 / (bs0 + as0), called pDown.]
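The imbalance quantities defined in Figure 6 are simple ratios of the sizes at the top of the book (bs0 at the best bid, as0 at the best offer). A sketch, with the function name being mine:

```python
# Book-pressure imbalance from figure 6: bs0 and as0 are the displayed
# sizes at the best bid and best offer, respectively.
def imbalance(bs0: int, as0: int) -> tuple:
    """Return (pUP, pDown): pUP = bs0/(bs0+as0), pDown = as0/(bs0+as0)."""
    total = bs0 + as0
    return bs0 / total, as0 / total

# With equal sizes at the top of the book, the pressure is balanced:
p_up, p_down = imbalance(10, 10)
print(p_up, p_down)  # 0.5 0.5
```

By construction pUP + pDown = 1, so the pair collapses to a single signal: a book stacked on the bid side (pUP near 1) suggests the next tick is up.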
figure 8. the trading rate. [Scatter of the percentage of executions against the bid at each pUP, both axes in percent (y: 5%–50%; x: 0%–95%), with fitted line: Prob of execution = 0.47 pUP − 1.593.]
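The fitted line in Figure 8 can be applied directly, treating pUP as a percentage (which matches the axis ranges in the figure). The coefficients are read off the figure; the clamp to [0, 100] is an addition of mine, since the raw line goes negative at very small pUP:

```python
# Linear fit from figure 8, with pUP expressed as a percentage (0-100).
# The clamp to [0, 100] is an assumption, not part of the fit itself.
def prob_execution_pct(p_up_pct: float) -> float:
    raw = 0.47 * p_up_pct - 1.593
    return min(max(raw, 0.0), 100.0)

for p in (10, 50, 90):
    print(p, round(prob_execution_pct(p), 2))
```

The slope is the whole story: the more the book leans toward the bid, the more likely a resting bid is to get filled, which is what makes queue position worth racing for.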
figure 7. the queue position. [Queue size at a price level shown evolving over time from each starting point.]