Accelerating scientific code on AI hardware with Reactant.jl
- Track: HPC, Big Data & Data Science
- Room: H.1308 (Rolin)
- Day: Sunday
- Start: 09:00
- End: 09:25
- Video only: h1308
Scientific models are limited today by compute resources, forcing approximations driven by feasibility rather than theory. As a consequence, they miss important physical processes and decision-relevant regional details. Advances in AI-driven supercomputing — specialized tensor accelerators, AI compiler stacks, and novel distributed systems — offer unprecedented computational power. Yet scientific applications such as ocean models, often written in Fortran, C++, or Julia and built for traditional HPC, remain largely incompatible with these technologies. This gap hampers performance portability and isolates scientific computing from the rapid, cloud-based innovation happening around AI workloads.
In this talk we present Reactant.jl, a free and open-source optimising compiler framework for the Julia programming language, built on MLIR and XLA. Reactant.jl preserves high-level semantics (e.g. linear algebra operations), enabling aggressive cross-function, high-level optimisations and generating efficient code for a variety of backends (CPU, GPU, TPU, and more). Furthermore, Reactant.jl integrates with Enzyme to provide high-performance, multi-backend automatic differentiation.
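To give a flavour of the programming model described above, here is a minimal sketch of compiling an ordinary Julia function with Reactant.jl. It assumes the package's documented public interface (`Reactant.to_rarray` to move data to the accelerator-backed array type, and the `@compile` macro to trace and compile a function for the selected backend); it is an illustrative example, not code from the talk.

```julia
using Reactant

# An ordinary Julia function using high-level linear algebra;
# Reactant traces it into MLIR/XLA, preserving these semantics
# so the compiler can optimise across the whole computation.
f(W, x) = sum(abs2, W * x)

# Move plain Julia arrays into Reactant's device-backed array type.
W = Reactant.to_rarray(rand(Float32, 10, 100))
x = Reactant.to_rarray(rand(Float32, 100, 5))

# Compile f for the current backend (CPU, GPU, or TPU),
# then call the compiled function like a normal Julia function.
f_compiled = @compile f(W, x)
f_compiled(W, x)
```

The same source code can be retargeted to a different backend by changing the device the arrays live on, rather than by rewriting the function itself.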
As a practical demonstration, we will show the integration of Reactant.jl with Oceananigans.jl, a state-of-the-art GPU-based ocean model. We will show how the model can be seamlessly retargeted to thousands of distributed TPUs, unlocking orders-of-magnitude increases in throughput. This opens a path for scientific modelling software to take full advantage of next-generation AI and cloud hardware — without rewriting the codebase or sacrificing high-level expressiveness.
Speakers
- Mosè Giordano
- Jules Merckx