Brussels / 1 & 2 February 2025

Building a new GGML backend: How, Challenges and Opportunities with Novel Accelerators


llama.cpp/GGML is popular software for running (mostly) large language models. It supports common consumer and enterprise hardware such as NVIDIA, AMD and Intel GPUs. But what if you want to onboard a new accelerator, say an architecture that promises to cut power consumption severalfold? This talk shares the experience and lessons learned from building a (work-in-progress) GGML backend for Tenstorrent's Grayskull and Wormhole AI processors, and what it is like to work with a brand-new software stack.
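For context on the backend interface the talk refers to: a GGML backend is, at its core, a table of callbacks, and the central one walks the computation graph and dispatches each node to the device. The sketch below is a minimal illustration of that shape, loosely modeled on ggml-backend.h around the time of this talk; it is not taken from the Metalium backend, names such as my_accel_graph_compute are hypothetical, and exact member names and signatures have shifted between GGML versions.

// Illustrative sketch only (assumed interface, not the actual Metalium
// backend code). Real backends include GGML's internal headers to see
// struct ggml_cgraph and the backend vtable definitions.
#include "ggml.h"
#include "ggml-impl.h"
#include "ggml-backend-impl.h"

// The heart of a backend: walk the graph and dispatch each node.
static enum ggml_status my_accel_graph_compute(ggml_backend_t backend,
                                               struct ggml_cgraph * cgraph) {
    (void) backend; // a real backend would fetch its device handle from backend->context
    for (int i = 0; i < cgraph->n_nodes; i++) {
        struct ggml_tensor * node = cgraph->nodes[i];
        switch (node->op) {
            case GGML_OP_MUL_MAT:
                // enqueue a matrix-multiply kernel on the device (device-specific)
                break;
            case GGML_OP_ADD:
                // enqueue an elementwise add
                break;
            default:
                // ops the backend declined in supports_op() are routed to the
                // CPU backend by the scheduler, so reaching here is a bug
                return GGML_STATUS_FAILED;
        }
    }
    return GGML_STATUS_SUCCESS;
}

// supports_op is how a partial backend coexists with the CPU backend:
// the scheduler asks per node, and anything rejected falls back to CPU.
static bool my_accel_supports_op(ggml_backend_t backend,
                                 const struct ggml_tensor * op) {
    (void) backend;
    return op->op == GGML_OP_MUL_MAT || op->op == GGML_OP_ADD;
}

Beyond this, a real backend also implements buffer types so tensors can live in device memory; the Metalium documentation linked below walks through the actual structure.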

Source code: https://github.com/marty1885/llama.cpp/tree/metalium-support/
Documentation: https://github.com/marty1885/llama.cpp/blob/metalium-support/docs/backend/Metalium.md

Speakers

Martin Chang