Programming models with the ROCm™ compiler
- Track: HPC, Big Data & Data Science
- Room: UB5.132
- Day: Sunday
- Start: 11:35
- End: 12:00
This presentation showcases the different programming models that users of the AMD ROCm™ compiler can employ to offload computation to AMD GPUs. The talk highlights the interoperability of the programming models (HIP, OpenMP®, OpenCL™) with the supported languages (C, C++, Fortran), made possible by the common compiler infrastructure. It also summarizes the different compilers that AMD provides, to clear up common confusion and point users to the correct compiler for their application. Since a programming environment is not complete without libraries, the talk also showcases available libraries, such as rocBLAS. The talk provides a high-level overview together with examples of how to use the different approaches.
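To give a flavour of the HIP model discussed in the talk, here is a minimal, purely illustrative vector-add example (not taken from the talk); it assumes a working ROCm installation and is compiled with hipcc. Error checking is omitted for brevity.

```cpp
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

// Illustrative HIP kernel: each thread adds one element of a and b.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    float *dA, *dB, *dC;
    hipMalloc((void**)&dA, n * sizeof(float));
    hipMalloc((void**)&dB, n * sizeof(float));
    hipMalloc((void**)&dC, n * sizeof(float));
    hipMemcpy(dA, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dB, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    // CUDA-style triple-chevron launch syntax is supported by hipcc.
    vector_add<<<grid, block>>>(dA, dB, dC, n);

    hipMemcpy(c.data(), dC, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    hipFree(dA); hipFree(dB); hipFree(dC);
    return 0;
}
```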
First, it covers some fundamentals of GPU programming and the execution model of HIP, and then shows how to construct a simple HIP kernel from scratch. This is followed by a look at the HIPIFY tool for porting code from CUDA to HIP. Second, the talk covers OpenMP® as a GPU programming model in both C++ and Fortran, highlighting the trade-off between control and convenience when choosing HIP versus OpenMP®. Third, the C++ stdpar capability of ROCm™ is presented to show that users can write pure C++ programs while still benefiting from GPU offload via compiler magic. Lastly, an example shows how libraries can be used to offload the most relevant computations to the GPU without writing any GPU kernel code at all.
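For contrast with the explicit HIP version above, the following sketch expresses the same vector add in the OpenMP® offload style. The directives are standard OpenMP; the build flags shown in the trailing comment (e.g. amdclang++ with --offload-arch) are an assumption and depend on the ROCm and compiler version in use.

```cpp
#include <cstdio>
#include <vector>

// Same vector add, expressed with OpenMP target offload instead of an
// explicit HIP kernel. The map clauses move data to and from the GPU.
int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    float* pa = a.data();
    float* pb = b.data();
    float* pc = c.data();

    #pragma omp target teams distribute parallel for \
        map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
    for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    return 0;
}
// Possible build line (flags depend on the ROCm/compiler version):
//   amdclang++ -O2 -fopenmp --offload-arch=gfx90a vadd_omp.cpp
```

The stdpar approach mentioned above goes one step further: the loop is written with standard C++ parallel algorithms and execution policies, and the compiler handles the offload.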
All components mentioned in the talk are open-source, with most of the actual compiler work being done in the upstream llvm-project.
Speakers
Jan-Patrick Lehr