How to Reliably Measure Software Performance
- Track: Software Performance
- Room: H.1301 (Cornil)
- Day: Sunday
- Start: 11:50
- End: 12:30
Reliable performance measurement remains an unsolved problem across most open source projects. Benchmarks are often an afterthought, and when they aren't, they can be noisy, non-repeatable, and hard to act on.
This talk shares lessons learned from building a large-scale benchmarking system at Datadog and shows how small fixes can make a big difference: controlling environmental noise, designing benchmarks, interpreting results with sound statistical methods, and more.
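To give a flavor of the environment-control theme, here is a minimal, hypothetical sketch of pre-benchmark sanity checks on a Linux host. It is not code from the talk or from Datadog's system; the sysfs paths are standard Linux interfaces, but the specific checks are assumptions chosen for illustration.

```python
"""Illustrative pre-benchmark environment checks on a Linux host.

Not from the talk: the idea is simply to detect common sources of
run-to-run noise (CPU frequency scaling, turbo boost) before trusting numbers.
"""
from pathlib import Path


def check_cpu_governor() -> list[str]:
    """Warn if any CPU is not pinned to the 'performance' governor."""
    warnings = []
    pattern = "cpu[0-9]*/cpufreq/scaling_governor"
    for gov_file in Path("/sys/devices/system/cpu").glob(pattern):
        governor = gov_file.read_text().strip()
        if governor != "performance":
            warnings.append(f"{gov_file.parent.parent.name}: governor is '{governor}'")
    return warnings


def check_turbo_boost() -> list[str]:
    """Warn if turbo boost is enabled (clock speed then drifts with thermals)."""
    no_turbo = Path("/sys/devices/system/cpu/intel_pstate/no_turbo")
    if no_turbo.exists() and no_turbo.read_text().strip() != "1":
        return ["turbo boost is enabled; frequencies will vary between runs"]
    return []


if __name__ == "__main__":
    for warning in check_cpu_governor() + check_turbo_boost():
        print("WARNING:", warning)
```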
Attendees should leave with practical principles they can apply in their own projects to make benchmarks trustworthy and actionable. We'll illustrate each principle with real data — for instance, environment tuning that cut variance by 100x, or design changes that turned a flaky benchmark into a reliable one.
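To make the "sound statistical methods" point concrete, here is a small illustrative sketch, not the speakers' methodology: instead of comparing single averages, it bootstraps a confidence interval on the difference of medians between two benchmark runs and only flags a change when the interval excludes zero. The timing data and resample count are made up.

```python
"""Illustrative comparison of two benchmark runs via a bootstrap CI."""
import random
import statistics


def bootstrap_median_diff_ci(baseline, candidate, n_resamples=10_000, alpha=0.05):
    """Return a (1 - alpha) bootstrap CI for median(candidate) - median(baseline)."""
    rng = random.Random(42)  # fixed seed so the analysis itself is repeatable
    diffs = []
    for _ in range(n_resamples):
        b = rng.choices(baseline, k=len(baseline))
        c = rng.choices(candidate, k=len(candidate))
        diffs.append(statistics.median(c) - statistics.median(b))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_resamples)]
    hi = diffs[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi


if __name__ == "__main__":
    # Hypothetical timings in milliseconds from two benchmark runs.
    baseline = [10.2, 10.4, 10.1, 10.3, 10.5, 10.2, 10.4, 10.3]
    candidate = [10.9, 11.1, 10.8, 11.0, 11.2, 10.9, 11.1, 11.0]
    lo, hi = bootstrap_median_diff_ci(baseline, candidate)
    if lo > 0 or hi < 0:
        print(f"Likely real change: median shifted by [{lo:.2f}, {hi:.2f}] ms")
    else:
        print(f"No clear change: CI [{lo:.2f}, {hi:.2f}] ms includes zero")
```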
Speakers
- Kemal Akkoyun
- Augusto de Oliveira