# 42µs: How Deterministic Execution Beats Cloud Signing

A deep dive into the physics of HFT latency: why your signing infrastructure is losing you trades, and how hardware isolation beats network-based signing by orders of magnitude.
## The Speed of Light Problem
Signing latency costs you fills. Not metaphorically—literally.
Light travels at ~200 km/ms in fiber optic cable. When you sign a transaction with AWS KMS, your data must:
- Leave your instance (1-2ms network stack)
- Travel to the KMS endpoint (1-5ms, depending on region)
- Wait in queue (variable, depending on load)
- Get signed (~0.5ms)
- Return (1-5ms)
Total: easily 10-50ms in practice, with high variance. For high-frequency trading, that variance is worse than the average. The market has moved. Your quote is stale. You either miss the trade or get adversely selected.
## The Real Problem: Jitter
The average latency matters less than the variance.
If your signing takes 15ms one time and 80ms the next, you can’t size your risk correctly. You have to widen spreads to account for the uncertainty. Every basis point of extra spread is alpha you’re leaving on the table.
This is the real cost of remote signing: not just raw latency, but unpredictability.
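The effect is easy to see with tail percentiles. The sketch below (illustrative only, not part of any ZeroCopy code) compares two latency distributions with the same median: one steady, one with occasional 80ms spikes. The median barely distinguishes them; the p99, the number you actually have to budget spread against, differs by 5x.

```rust
/// Nearest-rank percentile of a latency sample, in the same unit as the input.
/// (Hypothetical helper for illustration.)
fn percentile(samples: &mut Vec<f64>, p: f64) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    samples[rank.saturating_sub(1).min(samples.len() - 1)]
}

fn main() {
    // Two signing paths, both with a 14 ms median, in milliseconds.
    let mut steady: Vec<f64> = [vec![14.0; 95], vec![16.0; 5]].concat();
    let mut jittery: Vec<f64> = [vec![14.0; 95], vec![80.0; 5]].concat();

    // Medians match; the tails do not. You must size risk against the tail.
    println!(
        "steady:  p50 = {} ms, p99 = {} ms",
        percentile(&mut steady, 50.0),
        percentile(&mut steady, 99.0)
    );
    println!(
        "jittery: p50 = {} ms, p99 = {} ms",
        percentile(&mut jittery, 50.0),
        percentile(&mut jittery, 99.0)
    );
}
```

The jittery path forces you to quote as if every sign could take 80ms, even though 95% of them don't. That gap between p50 and p99 is what gets priced into your spread.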
## The Solution: Eliminate the Network
At ZeroCopy, we asked: what happens when you remove the network entirely from the signing path?
The answer, measured on our benchmark hardware: ~42 microseconds for an ECDSA sign inside an AWS Nitro Enclave over VSock.
Not milliseconds. Microseconds.
## How We Get There
The key difference is transport and isolation:
| Factor | Cloud KMS | ZeroCopy Enclave |
|---|---|---|
| Key Location | Remote datacenter | Same physical host |
| Transport | HTTPS over TCP/TLS | VSock (hypervisor channel) |
| Serialization | JSON | Zero-copy binary |
| CPU Scheduling | Shared across tenants | Isolated cores |
| Kernel Interrupts | Hundreds/sec | Suppressed (tickless kernel) |
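The serialization row is worth a concrete look. The sketch below contrasts a KMS-style JSON request with a fixed binary layout for the same 32-byte digest. The opcode byte and field names here are hypothetical, chosen for illustration; this is not ZeroCopy's actual wire format.

```rust
/// Fixed binary layout: one (illustrative) opcode byte followed by the raw
/// 32-byte digest. The receiver can read the digest in place, with no parse step.
fn encode_frame(digest: &[u8; 32]) -> [u8; 33] {
    let mut frame = [0u8; 33];
    frame[0] = 0x01; // hypothetical "sign this digest" opcode
    frame[1..].copy_from_slice(digest);
    frame
}

/// KMS-style JSON framing of the same request: field names, quoting, and a
/// 2x hex expansion of the digest, all of which must be parsed and copied
/// on the receiving side.
fn encode_json(digest: &[u8; 32]) -> String {
    let hex: String = digest.iter().map(|b| format!("{b:02x}")).collect();
    format!("{{\"MessageType\":\"DIGEST\",\"Message\":\"{hex}\"}}")
}

fn main() {
    let digest = [0xabu8; 32];
    println!(
        "binary frame: {} bytes, JSON frame: {} bytes",
        encode_frame(&digest).len(),
        encode_json(&digest).len()
    );
}
```

The byte counts (33 vs. roughly 100) matter less than the work saved: a fixed layout means no tokenizer, no allocation, and no copy between the transport buffer and the signing routine.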
## The Kernel Configuration
Our AMI uses these boot parameters on isolated signing cores:
```bash
isolcpus=2-127 nohz_full=2-127 rcu_nocbs=2-127 intel_idle.max_cstate=0
```

- Cores 2-127 are dedicated to the signing workload; the OS scheduler cannot place other work there.
- The kernel timer tick is suppressed on those cores when only one task is running.
- C-states are disabled, so the CPU stays fully powered with no wake-up penalty.
The 42µs figure comes from this combination: local key material, hypervisor-level transport, and a kernel that gets out of the way.
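A quick sanity check that the boot flags actually took effect: on Linux, the kernel exposes the isolated set at `/sys/devices/system/cpu/isolated` in its cpu-list syntax (e.g. `2-127`). A minimal sketch, assuming a Linux host; `parse_cpu_list` is an illustrative helper, not part of any shipped tooling:

```rust
use std::fs;

/// Parse a kernel cpu-list string like "2-127" or "0,2-63" into core IDs.
fn parse_cpu_list(s: &str) -> Vec<u32> {
    let mut cpus = Vec::new();
    for part in s.trim().split(',').filter(|p| !p.is_empty()) {
        match part.split_once('-') {
            Some((lo, hi)) => {
                let (lo, hi): (u32, u32) = (lo.parse().unwrap(), hi.parse().unwrap());
                cpus.extend(lo..=hi); // inclusive range, matching kernel semantics
            }
            None => cpus.push(part.parse().unwrap()),
        }
    }
    cpus
}

fn main() {
    // Contains "2-127" on a host booted with isolcpus=2-127;
    // empty on an unconfigured machine.
    let isolated = fs::read_to_string("/sys/devices/system/cpu/isolated")
        .unwrap_or_default();
    println!("isolated cores: {} of them", parse_cpu_list(&isolated).len());
}
```

If the file is empty, the flags were dropped somewhere between the AMI and the bootloader, and your "isolated" signing threads are sharing cores with everything else.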
---
## The Benchmark
Measured on c6i.metal (Ice Lake, 128 vCPU), signing a 32-byte hash:
| Configuration | P50 signing latency | P99 signing latency |
|:--------------|:--------------------|:--------------------|
| AWS KMS (same region) | ~10ms | ~50ms+ |
| ZeroCopy Enclave (VSock, hardened kernel) | ~42µs | ~120µs |
The AWS KMS figures are from our own testing; they vary significantly by region load. The ZeroCopy enclave figure is from our Rust benchmark suite in `sentinel-crypto`.
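The measurement methodology can be sketched with nothing but `std::time::Instant`: warm up, time each call, and report nearest-rank percentiles. This is a stand-in harness with a dummy workload, not the actual `sentinel-crypto` suite (which times a real ECDSA sign over VSock):

```rust
use std::time::Instant;

/// Time `f` per call in nanoseconds: warm-up runs first, then timed runs.
/// Returns (p50, p99) by nearest rank over the sorted samples.
fn bench<F: FnMut()>(mut f: F, warmup: usize, iters: usize) -> (u128, u128) {
    for _ in 0..warmup {
        f(); // warm caches and branch predictors before measuring
    }
    let mut ns: Vec<u128> = Vec::with_capacity(iters);
    for _ in 0..iters {
        let t = Instant::now();
        f();
        ns.push(t.elapsed().as_nanos());
    }
    ns.sort_unstable();
    (ns[iters / 2], ns[iters * 99 / 100])
}

fn main() {
    // Stand-in workload: a mixing loop in place of a real sign call.
    let mut x = 0u64;
    let (p50, p99) = bench(
        || {
            for i in 0..1_000u64 {
                x = x.wrapping_mul(6364136223846793005).wrapping_add(i);
            }
            std::hint::black_box(x); // keep the work from being optimized away
        },
        1_000,
        10_000,
    );
    println!("p50 = {p50} ns, p99 = {p99} ns");
}
```

Reporting p99 alongside p50 is the point: on a noisy box the two diverge sharply, and on the hardened kernel described above they converge. That convergence is what the table's 42µs/120µs pair is claiming.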
---
## Verification
The point of enclaves is that you don't have to trust us.
1. **Check enclave measurements:**

   ```bash
   nitro-cli describe-enclaves
   ```

2. **Compare PCRs against our published measurements:** `docs/measurements/README.md`

If the PCR hashes match, you know exactly what code is running inside the enclave. Even we can't substitute different code without changing the measurement.
## The Bottom Line
Remote signing services are convenient. They handle key management, rotation, and compliance. Those are real benefits, and for many use cases they’re the right call.
But if you’re running strategies where signing latency shows up in your fill rates — and many do, once you measure carefully — local enclave signing is worth evaluating. The 42µs figure isn’t magic. It’s what you get when you stop routing signing requests across a datacenter.
Questions? security@zerocopy.systems