Brussels / 31 January & 1 February 2026

Accelerating scientific code on AI hardware with Reactant.jl


Scientific models today are limited by available compute resources, forcing approximations driven by feasibility rather than theory. As a result, they miss important physical processes and decision-relevant regional detail. Advances in AI-driven supercomputing, including specialized tensor accelerators, AI compiler stacks, and novel distributed systems, offer unprecedented computational power. Yet scientific applications such as ocean models, often written in Fortran, C++, or Julia and built for traditional HPC, remain largely incompatible with these technologies. This gap hampers performance portability and isolates scientific computing from the rapid, cloud-based innovation that drives AI workloads.

In this talk we present Reactant.jl, a free and open-source optimising compiler framework for the Julia programming language, based on MLIR and XLA. Reactant.jl preserves high-level semantics (e.g. linear algebra operations), enabling aggressive cross-function, high-level optimisations and the generation of efficient code for a variety of backends (CPU, GPU, TPU, and more). Furthermore, Reactant.jl combines with Enzyme to provide high-performance, multi-backend automatic differentiation.
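
To make the workflow concrete, here is a minimal sketch: the function f, the array sizes, and the gradient call are purely illustrative, and the snippet assumes Reactant.jl's to_rarray and @compile interface together with Enzyme's gradient and Reverse mode.

    using Reactant, Enzyme

    # An ordinary Julia function written with high-level array semantics.
    f(x, W) = sum(tanh.(W * x))

    # Illustrative data; to_rarray moves it onto the accelerator-backed array type.
    x = Reactant.to_rarray(rand(Float32, 128))
    W = Reactant.to_rarray(rand(Float32, 64, 128))

    # Trace the function, optimise it through MLIR/XLA, and obtain a compiled
    # thunk that runs on the selected backend (CPU, GPU, TPU, ...).
    f_compiled = @compile f(x, W)
    f_compiled(x, W)

    # Differentiating through the same pipeline: Enzyme supplies the reverse-mode
    # derivative, and Reactant compiles it for the same backend.
    grad_f(x, W) = Enzyme.gradient(Reverse, f, x, W)
    grad_f_compiled = @compile grad_f(x, W)
    grad_f_compiled(x, W)

The compiled thunk can be reused for further inputs of the same shape, so the tracing and compilation cost is paid once rather than on every call.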

As a practical demonstration, we will show the integration of Reactant.jl with Oceananigans.jl, a state-of-the-art GPU-based ocean model. We show how the model can be seamlessly retargeted to thousands of distributed TPUs, unlocking orders-of-magnitude increases in throughput. This opens a path for scientific modelling software to take full advantage of next-generation AI and cloud hardware — without rewriting the codebase or sacrificing high-level expressiveness.
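
A rough sketch of what this can look like in user code is shown below. The ReactantState architecture name and the compilation of a single time_step! call are assumptions about the Oceananigans.jl integration with Reactant.jl, and the grid, model configuration, and time step are purely illustrative.

    using Oceananigans, Reactant

    # Assumed Reactant-aware architecture, so that model fields are stored as
    # Reactant arrays rather than plain CPU/GPU arrays.
    arch = Oceananigans.Architectures.ReactantState()

    grid = RectilinearGrid(arch; size=(256, 256, 32), extent=(1e5, 1e5, 1e3))
    model = HydrostaticFreeSurfaceModel(; grid)

    # Compile one model time step through Reactant/XLA, then reuse the compiled
    # thunk for the whole run; the same code retargets CPU, GPU, or TPU backends.
    Δt = 60.0  # illustrative time step in seconds
    compiled_step! = @compile Oceananigans.TimeSteppers.time_step!(model, Δt)

    for _ in 1:1000
        compiled_step!(model, Δt)
    end

In this setup the ocean model code itself is unchanged; only the choice of architecture and the compilation of the time step differ from a standard CPU or GPU run.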

Speakers

Photo of Mosè Giordano Mosè Giordano
Jules Merckx
