Benchmarking guide
Measure Velocity performance, interpret the results, and share them with the community.
Last updated October 8, 2025
Benchmark with confidence
Good benchmarks tell you whether a change helps or hurts. Use this guide to set up reliable tests, run the included harnesses, and publish results others can reproduce.
Prepare the lab
- Reserve dedicated instances or bare metal; avoid noisy neighbors.
- Pin CPU frequency and bind Velocity to a NUMA node when possible.
- Capture kernel version, NIC model, offload settings, and whether AF_XDP/io_uring acceleration is enabled.
- Document the configuration so future runs can match it; a capture sketch follows this list.
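One minimal bash sketch of that setup is below. The eth0 interface name, NUMA node 0, the ./velocity binary, and the output path are assumptions; substitute your own NIC, topology, and results layout.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Pin the CPU governor so frequency scaling doesn't skew runs
# (requires cpupower from linux-tools and root).
sudo cpupower frequency-set --governor performance

# Record the metadata this guide asks for. eth0 and the output
# path are placeholders; adjust them to your hardware.
mkdir -p bench/results
{
  echo "kernel: $(uname -r)"
  lscpu | grep 'Model name'
  ethtool -i eth0   # NIC driver and firmware version
  ethtool -k eth0   # offload settings
} > "bench/results/env-$(date +%Y-%m-%d).txt"

# Bind the server to one NUMA node. The velocity binary name and
# node 0 are assumptions; match them to your deployment.
numactl --cpunodebind=0 --membind=0 ./velocity &
```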
Core workloads
Harness | Location | What it answers
---|---|---
Handshake microbenchmarks | benchmarks/handshake-bench/ | How do the light, balanced, and secure profiles compare?
Page-load simulation | benchmarks/page-load/ | Does Velocity improve end-user latency versus HTTP/3?
AF_XDP fast-path | benchmarks/fast-path/ | What throughput can the edge deliver under sustained load?
Run the suite
cargo bench -p handshake-bench
node benchmarks/page-load/run.js --profile balanced --duration 180
Export CSVs to bench/results/<date>-<label>.csv. Include the environment metadata in the filename or a companion README.
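One hypothetical way to apply that naming convention and keep the metadata next to the data; RAW_CSV, the label, and the README contents here are illustrative, not harness output.

```bash
# Illustrative only: RAW_CSV and LABEL stand in for your harness's
# real output file and a short description of the run.
RAW_CSV=page-load-results.csv
LABEL=balanced-bare-metal

DEST="bench/results/$(date +%Y-%m-%d)-$LABEL.csv"
mkdir -p bench/results
cp "$RAW_CSV" "$DEST"

# Companion README carrying the environment metadata.
{
  echo "kernel: $(uname -r)"
  echo "commit: $(git rev-parse --short HEAD)"
} > "${DEST%.csv}-README.md"
```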
Read the numbers
- Track medians and tails (p95, p99); regressions usually surface at the tail first. See the sketch after this list for pulling those percentiles out of a CSV.
- Compare CPU per request while holding throughput constant. Velocity should remain within ~10% of the baseline.
- Look for downgrade spikes—if clients fall back to HTTP/3 often, the benchmark is measuring the wrong thing.
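A quick way to extract those tail figures from a results file. This sketch assumes one sample per row with latency in the second column, and a filename following the <date>-<label> convention above; adjust both to your harness's actual CSV layout.

```bash
# Assumes latency values in column 2, one sample per row.
cut -d, -f2 bench/results/2025-10-08-balanced.csv | sort -n |
  awk '{ v[NR] = $1 }
       END {
         print "p50:", v[int(NR * 0.50)]
         print "p95:", v[int(NR * 0.95)]
         print "p99:", v[int(NR * 0.99)]
       }'
```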
Publish findings
- Open a PR with your CSVs, hardware summary, and Grafana screenshots. The maintainers love real-world data.
- Flag anomalies via the performance label so they land in the next optimization sprint.
- Link to the exact commit or release you benchmarked so others can replicate the run.