Brussels / 31 January & 1 February 2026


One GPU, Many Models: What Works and What Segfaults


Serving multiple models on a single GPU sounds great until something segfaults.

Two approaches dominate parallel inference on a single card: MIG (hardware partitioning) and MPS (software sharing). Both promise efficient GPU sharing.
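
To make the contrast concrete, here is a minimal sketch (not from the talk) of how a client process targets each mode: under MIG a worker is pinned to one isolated slice by its UUID, while under MPS all workers share device 0 and a daemon interleaves their kernels. The serve_model.py worker and the UUID are hypothetical placeholders.

    import os
    import subprocess

    # MIG: each slice appears as its own CUDA device with dedicated memory
    # and SMs. Pin a worker to a slice via the MIG UUID from nvidia-smi -L.
    mig_env = dict(os.environ, CUDA_VISIBLE_DEVICES="MIG-<uuid>")  # placeholder
    subprocess.Popen(["python", "serve_model.py"], env=mig_env)    # hypothetical worker

    # MPS: all clients see the same device; the MPS daemon interleaves their
    # kernels. The percentage caps a client's SM share, but device memory is
    # NOT partitioned, so one misbehaving client can still affect the others.
    mps_env = dict(os.environ,
                   CUDA_VISIBLE_DEVICES="0",
                   CUDA_MPS_ACTIVE_THREAD_PERCENTAGE="50")
    subprocess.Popen(["python", "serve_model.py"], env=mps_env)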

I tested both strategies for running different AI workloads in parallel.

This talk digs into what actually happened: where things worked, where memory isolation fell apart, which configs crashed, and what survived under load.

By the end, you'll know:

  1. How to reclaim unused GPU capacity.
  2. How to set up MIG and MPS (see the sketch after this list).
  3. How MIG and MPS behave under load.
  4. Which memory issues, crashes, and failures to expect.
  5. Which config is best suited to your AI workload.
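
For item 2, the commands below sketch the basic setup, wrapped in Python so the examples stay in one language. The MIG part assumes an A100-class GPU; profile IDs differ per GPU model, so list them with nvidia-smi mig -lgip before creating instances. All of these require admin rights on the host.

    import subprocess

    def run(cmd):
        # Echo and execute one admin command, failing loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # --- MIG: carve the GPU into hardware-isolated slices ---
    run(["nvidia-smi", "-i", "0", "-mig", "1"])   # enable MIG mode on GPU 0
    run(["nvidia-smi", "mig", "-lgip"])           # list available instance profiles
    # Create two GPU instances (profile 9 = 3g.20gb on an A100) plus their
    # default compute instances (-C):
    run(["nvidia-smi", "mig", "-cgi", "9,9", "-C"])
    run(["nvidia-smi", "-L"])                     # prints the MIG UUIDs workers target

    # --- MPS: no partitioning; a daemon funnels clients into one GPU context ---
    run(["nvidia-cuda-mps-control", "-d"])        # start the MPS control daemon
    # Stop it later with: echo quit | nvidia-cuda-mps-control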

Speakers

Yash Panchal
