Brussels / 31 January & 1 February 2026


How to Measure Software Performance Reliably


Reliable performance measurement remains an unsolved problem across most open source projects. Benchmarks are often an afterthought, and even when they aren't, they can be noisy, non-reproducible, and hard to act on.

This talk shares lessons learned from building a large-scale benchmarking system at Datadog and shows how small fixes can make a big difference: controlling environmental noise, designing representative workloads, interpreting results with sound statistical methods, and more.
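As a small illustration of the statistical-interpretation point (not taken from the talk itself), the sketch below compares two hypothetical benchmark runs with a bootstrap confidence interval on the difference in median latency; the sample timings and the 95% level are illustrative assumptions.

```python
import random
import statistics

def bootstrap_diff_ci(baseline, candidate, iterations=10_000, alpha=0.05):
    """Bootstrap a confidence interval for the difference in median latency
    (candidate - baseline). If the interval excludes zero, the change is
    unlikely to be explained by run-to-run noise alone."""
    diffs = []
    for _ in range(iterations):
        b = [random.choice(baseline) for _ in baseline]
        c = [random.choice(candidate) for _ in candidate]
        diffs.append(statistics.median(c) - statistics.median(b))
    diffs.sort()
    lo = diffs[int(alpha / 2 * iterations)]
    hi = diffs[int((1 - alpha / 2) * iterations) - 1]
    return lo, hi

# Hypothetical per-iteration timings in milliseconds from two benchmark runs.
baseline = [12.1, 11.9, 12.3, 12.0, 12.4, 11.8, 12.2, 12.1, 12.0, 12.3]
candidate = [12.6, 12.8, 12.5, 12.9, 12.7, 12.6, 13.0, 12.8, 12.7, 12.9]

low, high = bootstrap_diff_ci(baseline, candidate)
print(f"95% CI for median difference: [{low:.2f}, {high:.2f}] ms")
```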

We’ll walk through a real case study demonstrating how rigorous benchmarking can turn assumptions about performance into decisions backed by data.

Attendees should leave with practical principles they can apply in their own projects to make benchmarks trustworthy and actionable.

Speakers

Kemal Akkoyun
Augusto de Oliveira
