# The Nanosecond Economy: HFT Infrastructure Fundamentals

FPGA feed handlers, kernel bypass, and the physics of sub-3µs trading: why infrastructure is the edge in high-frequency markets.
At a competitive HFT desk, the difference between first and second in the queue determines whether you get filled. And queue position comes down to latency, not algorithmic cleverness.
This post covers the infrastructure that makes sub-3µs trading possible. Not the strategies, but the plumbing underneath them.
## 1. The Physics of Speed
At HFT timescales, software overhead dominates:
| Component | Latency | Notes |
|---|---|---|
| Light through 1m fiber | 5ns | ~2/3 c in glass |
| L3 cache access | 10-20ns | On-die |
| DRAM access | 60-100ns | Off-die |
| Kernel syscall | 200-500ns | Context switch |
| Network interrupt | 1-5µs | IRQ handling |
| TCP stack | 5-10µs | Kernel networking |
**The implication:** If your trade decision takes 1µs but your network stack takes 10µs, the quality of the decision is irrelevant. The stack ate your edge.
### The Latency Budget
A competitive HFT system allocates its latency budget deliberately:
Total Budget: ~2.6µs tick-to-trade. Every component must stay within its allocation.
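To make the allocation concrete, here is a hypothetical breakdown assembled from the ranges quoted in this post (kernel-bypass feed handler, a ~1µs decision, a one-way send of roughly half the bypassed RTT). The stage names and exact splits are illustrative, not a real desk's budget:

```python
# Illustrative tick-to-trade latency budget. The stage numbers are
# hypothetical, drawn from the ranges quoted in this post; a real desk
# measures these, it does not estimate them.
BUDGET_NS = 2600  # ~2.6µs total tick-to-trade

stages_ns = {
    "feed_handler": 600,   # kernel-bypass parse (500ns-1µs range)
    "strategy":     1000,  # trade decision (~1µs)
    "order_send":   1000,  # one-way wire time (~half the 1-2µs bypassed RTT)
}

def headroom_ns(stages_ns: dict, budget_ns: int) -> int:
    """Budget minus total spend; negative means the budget is blown."""
    return budget_ns - sum(stages_ns.values())

print(headroom_ns(stages_ns, BUDGET_NS))  # → 0: fully allocated, no slack
```

The point of writing it down is the discipline: any stage that grows forces an explicit cut somewhere else.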
## 2. Feed Handler Options
The first bottleneck is market data ingestion. Every exchange sends a firehose of quotes.
| Approach | Latency | Cost | Verdict |
|---|---|---|---|
| A. Software (C++ on Linux) | 3-5µs | $50K/year | Baseline. Acceptable for market making, not latency arb. |
| B. Kernel Bypass (DPDK/Solarflare) | 500ns-1µs | $100K/year | Better. Eliminates kernel overhead. |
| C. FPGA Feed Handler | 50-200ns | $500K/year | Wire-speed parsing. Required for the fastest strategies. |
Why FPGA? An FPGA parses the packet as it arrives, byte by byte. There is no store-and-forward. By the time the last byte of a quote arrives, the parsed price is already in your strategy’s cache.
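To see why cut-through parsing wins, contrast it with the software baseline: a store-and-forward handler waits for the whole message, then unpacks it. The sketch below uses a made-up fixed-layout quote message (the 16-byte layout, field names, and offsets are invented for illustration; real feeds such as ITCH or proprietary binary protocols differ). Because every field lives at a fixed offset, each field is decodable the moment its bytes arrive, which is exactly the property an FPGA exploits in hardware:

```python
import struct

# Hypothetical fixed-layout quote message (NOT a real exchange format):
#   offset 0:  u64 symbol_id
#   offset 8:  u32 price (in ticks)
#   offset 12: u32 size
QUOTE_FMT = "<QII"  # little-endian: u64, u32, u32
QUOTE_LEN = struct.calcsize(QUOTE_FMT)  # 16 bytes

def parse_quote(buf: bytes):
    """Store-and-forward parse: needs the full message in hand."""
    symbol_id, price, size = struct.unpack(QUOTE_FMT, buf[:QUOTE_LEN])
    return symbol_id, price, size

def parse_price_early(first_12_bytes: bytes) -> int:
    """Cut-through idea: the price lives at bytes 8..11, so it is
    decodable before the rest of the message has even arrived."""
    (price,) = struct.unpack_from("<I", first_12_bytes, 8)
    return price

msg = struct.pack(QUOTE_FMT, 42, 10150, 300)
assert parse_quote(msg) == (42, 10150, 300)
assert parse_price_early(msg[:12]) == 10150
```

In software this early-decode trick saves little, because the kernel hands you whole packets anyway; in an FPGA, operating byte-by-byte on the wire, it is the entire advantage.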
- Source: Xilinx Alveo U50 Datasheet (sub-100ns parse latency)
## 3. Kernel Bypass Networking
If FPGA is out of budget, kernel bypass with Solarflare/Mellanox NICs eliminates the biggest software overhead.
### Step 1: Enable OpenOnload

```bash
# Run the application under OpenOnload (Solarflare kernel bypass stack)
# using the low-latency profile
onload --profile=latency myapp

# Verify bypass is active
onload_stackdump | grep "UDP\|TCP"
```
### Step 2: Pin to NUMA Node

```bash
# Bind application to NUMA node 0 (where the NIC is attached)
numactl --cpunodebind=0 --membind=0 ./trading_engine
```
### Step 3: Disable Interrupt Coalescing

```bash
# Solarflare: disable adaptive coalescing, deliver every frame immediately
ethtool -C eth0 adaptive-rx off rx-usecs 0 rx-frames 1
```
**Verification:**

```bash
# Before bypass: typically 8-12µs RTT
# After bypass:  typically 1-2µs RTT
ping -c 100 <exchange_gateway> | tail -1
```
These numbers are representative of what the Solarflare/OpenOnload stack achieves — your actual results depend on hardware, kernel version, and network topology.
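One caveat on measurement: the average RTT that `ping` summarizes hides the number that matters. In HFT you care about the tail, because one slow round trip is one missed fill. A small helper for summarizing a series of RTT samples (the sample values below are invented for illustration):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    idx = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[idx]

# Invented RTT samples in microseconds: nine fast, one pathological
rtts_us = [1.1, 1.2, 1.2, 1.3, 1.2, 1.1, 9.8, 1.2, 1.3, 1.2]

print(percentile(rtts_us, 50))  # → 1.2 (the median looks healthy)
print(percentile(rtts_us, 99))  # → 9.8 (the tail exposes the outlier)
```

Track p99 and p99.9 over time, not the mean; a tuning regression usually shows up in the tail first.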
## 4. Verifying Your Kernel State
Before investing in FPGA or kernel bypass, verify your baseline configuration is sane. Check for common issues that add unnecessary latency:
- `irqbalance` running (moves your interrupts unpredictably)
- `nohz_full` not set on latency-critical cores (timer interrupts)
- Transparent huge pages enabled (causes allocation stalls)
- Interrupt coalescing still on after you turned it off
```bash
# Check the frequency governor on a latency-critical core
# (should be "performance", not "powersave")
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Check irqbalance status (should be inactive on a tuned box)
systemctl status irqbalance

# Check transparent huge pages (should be "never" or "madvise")
cat /sys/kernel/mm/transparent_hugepage/enabled
```
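These checks can also be scripted. A minimal sketch that parses the relevant kernel interfaces as text; the helper names and example strings are mine, not a standard tool, and on a real box you would read `/sys/kernel/mm/transparent_hugepage/enabled` and `/proc/cmdline` directly:

```python
def thp_setting(sysfs_text: str) -> str:
    """Parse the THP sysfs file, e.g. 'always madvise [never]';
    the bracketed token is the active setting."""
    for token in sysfs_text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active THP setting found")

def nohz_full_cores(cmdline: str) -> str:
    """Extract the nohz_full= core list from /proc/cmdline, '' if unset."""
    for token in cmdline.split():
        if token.startswith("nohz_full="):
            return token.split("=", 1)[1]
    return ""

# Example inputs (hypothetical; read the real files on an actual host)
assert thp_setting("always madvise [never]") == "never"
assert nohz_full_cores("BOOT_IMAGE=/vmlinuz ro quiet nohz_full=2-7") == "2-7"
assert nohz_full_cores("BOOT_IMAGE=/vmlinuz ro quiet") == ""
```

Running an audit like this in CI against production hosts catches the classic failure mode: a kernel upgrade or image rebuild silently reverting your tuning.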
## 5. Trade-offs

- **FPGA vs. software:** FPGAs are 10x faster but 10x harder to debug. Your strategy complexity is limited by FPGA development velocity.
- **Kernel bypass vs. observability:** When you bypass the kernel, you lose `tcpdump`, `netstat`, and standard debugging tools. You need custom tooling.
- **Cost curve:** The last microsecond costs 10x more than the first. Know when to stop optimizing.
## 6. The Core Insight
In HFT, infrastructure is not a cost center. It is a profit function.
The difference between a profitable desk and a losing one is often not the algorithm — it’s whether you’re first or second in queue. In a zero-sum game, second place is the first loser.
When someone asks about your edge, the honest answer is often: “Our plumbing is better.”