Brussels / 31 January & 1 February 2026

Schedule

HPC, Big Data & Data Science



Read the Call for Papers at https://hpc-bigdata-fosdem26.github.io/.

Sunday

09:00–09:25  Accelerating scientific code on AI hardware with Reactant.jl (Mosè Giordano, Jules Merckx)
09:30–09:55  ROCm™ on TheRock(s) (Jan-Patrick Lehr)
10:00–10:25  JUBE: An Environment for systematic benchmarking and scientific workflows (Thomas Breuer)
10:30–10:55  Scaling Gmsh-based FEM on LUMI: Efficiently Handling Thousands of Partitions (Boris Martin)
11:00–11:25  Productive Parallel Programming with Chapel and Arkouda (Jade Abraham)
11:30–11:55  Track Energy & Emissions of User Jobs on HPC/AI Platforms using CEEMS (Mahendra Paipuri)
12:00–12:25  Partly Cloudy with a Chance of Zarr: A Virtualized Approach to Zarr Stores from ECMWF's Fields Database (Tobias Kremer)
12:30–12:55  Zero‑Touch HPC Nodes: NetBox, Tofu and Packer for a Self‑Configuring SLURM Cluster (Erich B, Ümit Seren, Leon Schwarzäugl)
13:00–13:10  Accelerating complex Bioinformatics AI pipelines with Kubernetes (Alessandro Pilotti)
13:10–13:20  Observability for AI Workloads on HPC: Beyond GPU Utilization Metrics (Samuel Desseaux)
13:20–13:30  Developing software tools for accelerated and differentiable scientific computing using JAX (Matt Graham)
13:35–13:45  High Performance Jupyter Notebooks with Zasper (Prasun Anand)
14:00–14:25  Update on the High Performance Software Foundation (HPSF) (Xavier Delaruelle)
14:30–14:55  Package management in the hands of users: dream and reality (Ludovic Courtès)
15:00–15:25  Spack v1.0 and Beyond: Managing HPC Software Stacks (Harmen Stoppels)
15:30–15:55  Status update on EESSI, the European Environment for Scientific Software Installations (Helena Vela Beltran)
16:00–16:25  Using OpenMP's interop for calling GPU-vendor libs with GCC (Tobias Burnus)
16:30–16:55  A Brief* overview of what makes modern accelerators interesting for HPC (FelixCLC)