Database benchmarks: Lessons learned from running a benchmark standard organization
- Track: Software Performance
- Room: H.1301 (Cornil)
- Day: Sunday
- Start: 13:10
- End: 13:50
Database vendors often engage in fierce competition over system performance. In the 1980s, this competition escalated into outright "benchmark wars". The creation of the TPC, a non-profit organization that defines standard benchmarks and supervises their use through rigorous audits, spelled the end of the benchmark wars and helped drive performance innovation in relational database management systems.
The TPC served as a model for later benchmark organizations, including the Linked Data Benchmark Council (LDBC, https://ldbc.org/), where I have been a contributor and board member for the past 5+ years. Driven by LDBC's workloads, graph database systems have achieved a 25× speedup in four years and a 71× price-performance improvement on transactional workloads.
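To make the price-performance metric concrete: TPC-style benchmarks divide the total system cost by the measured throughput, so the metric improves both when hardware gets cheaper and when systems get faster. The sketch below uses made-up figures chosen only to illustrate how a 71× improvement could arise; they are not actual LDBC or TPC results.

```python
# Hypothetical illustration of a TPC-style price-performance metric:
# total system cost divided by throughput (e.g., USD per operation/second).
# All figures below are made up for illustration only.
def price_performance(system_cost_usd: float, throughput_ops: float) -> float:
    return system_cost_usd / throughput_ops

old = price_performance(system_cost_usd=500_000, throughput_ops=10_000)   # 50.0 USD per op/s
new = price_performance(system_cost_usd=350_000, throughput_ops=500_000)  # 0.7 USD per op/s
print(f"price-performance improved {old / new:.0f}x")  # ~71x with these made-up numbers
```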
Defining database benchmarks requires carefully balancing multiple aspects: relevance, portability, scalability, and simplicity. Notably, the field has shifted in recent years toward simpler, leaderboard-style benchmarks that skip the rigorous auditing process but allow quick iteration; a minimal sketch of this style follows.
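As an illustration of the leaderboard style, a micro-benchmark harness in the spirit of ClickBench can be as simple as the sketch below: run a fixed set of queries, time each one, and publish the timings. The table, data, and queries here are hypothetical placeholders, not part of any official benchmark.

```python
# A minimal sketch of a leaderboard-style micro-benchmark: execute a fixed
# query set against a database and record the wall-clock time of each query.
# SQLite stands in for an arbitrary system under test.
import sqlite3
import time

QUERIES = [
    "SELECT COUNT(*) FROM hits",
    "SELECT user_id, COUNT(*) FROM hits GROUP BY user_id ORDER BY 2 DESC LIMIT 10",
]

def run_benchmark(conn: sqlite3.Connection) -> list[tuple[str, float]]:
    results = []
    for query in QUERIES:
        start = time.perf_counter()
        conn.execute(query).fetchall()  # materialize the full result set
        results.append((query, time.perf_counter() - start))
    return results

if __name__ == "__main__":
    # Load a small synthetic dataset; real leaderboards use fixed public data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE hits (user_id INTEGER, url TEXT)")
    conn.executemany(
        "INSERT INTO hits VALUES (?, ?)",
        [(i % 100, f"https://example.com/{i}") for i in range(10_000)],
    )
    for query, seconds in run_benchmark(conn):
        print(f"{seconds:8.4f} s  {query}")
```

The trade-off mirrors the one in the talk: such a harness gives fast, reproducible-enough numbers for a public leaderboard, but without the auditing that a TPC-style benchmark mandates.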
In this talk, I will share the lessons I have learned from designing database benchmarks and using them in practice. The talk has five sections:
- The need for database benchmarks
- TPC overview (Transaction Processing Performance Council)
- LDBC overview (Linked Data Benchmark Council)
- The current benchmark landscape (ClickBench, H2O, etc.)
- Takeaways for designing new benchmarks
Speakers
- Gábor Szárnyas