NixOS for Deterministic Distributed-System Benchmarking
- Track: Nix and NixOS
- Room: UA2.118 (Henriot)
- Day: Saturday
- Start: 16:15
- End: 16:35
- Video only: ua2118
Reproducibility remains one of the largest challenges in benchmarking distributed systems, especially when hardware, kernel settings, and dependency versions vary between tests. This talk presents a NixOS-based approach for constructing deterministic, portable benchmark environments for large-scale data infrastructure. We show how Nix's declarative system configuration, content-addressed builds, and reproducible packaging model allow engineers to isolate performance variables and eliminate configuration drift. Using Apache Cassandra as the primary case study, the talk demonstrates how NixOS can define and reproduce complete cluster environments, from OS images to JVM parameters and custom benchmarking tools, across both cloud and on-prem setups. Attendees will learn practical patterns for packaging workloads, pinning dependencies, and generating ephemeral benchmark nodes. The session concludes with a discussion of how Nix abstractions can support multi-architecture testing, including cross-compiling workloads for ARM and RISC-V targets, enabling more transparent and repeatable performance comparisons.
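The packaging and pinning patterns described in the abstract could be sketched as a minimal Nix flake. This is an illustrative assumption, not the speaker's actual configuration: the node name, nixpkgs pin, JVM flags, and sysctl choices are all hypothetical.

```nix
{
  # Pin the entire dependency graph to a single nixpkgs revision
  # so every benchmark node resolves identical package versions.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.bench-node = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ({ pkgs, ... }: {
          # Declarative Cassandra service: same binary and JVM
          # parameters on every node, no hand-edited config files.
          services.cassandra = {
            enable = true;
            package = pkgs.cassandra;
            jvmOpts = [ "-Xms8G" "-Xmx8G" "-XX:+UseG1GC" ];  # hypothetical tuning
          };
          # Freeze kernel-level variables that commonly skew benchmarks.
          boot.kernel.sysctl."vm.swappiness" = 0;
          powerManagement.cpuFreqGovernor = "performance";
        })
      ];
    };
  };
}
```

An ephemeral benchmark VM could then be produced with something like `nix build .#nixosConfigurations.bench-node.config.system.build.vm`, discarded after a run, and rebuilt bit-for-bit for the next comparison.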
Speakers
- Bruce Gain