Brussels / 3 & 4 February 2024


libamicontained: a low-level library for reasoning about resource restriction

A common question language runtimes have is: how many resources do I have access to? They want to know, e.g., how many threads they can run in parallel (their threadpool size), how many thread-local memory arenas to allocate, etc.

The kernel offers many endpoints to query this information: /proc/cpuinfo, /proc/stat, sched_getaffinity(), sysinfo(), the cpuset cgroup hierarchy's cpuset.cpus.effective, the isolcpus kernel command line parameter, and /sys/devices/system/cpu/online. Further, libcs offer divergent implementations of sysconf(_SC_NPROCESSORS_ONLN). As a bonus, the kernel scheduler may be configured to limit resources using cpu "shares" or cpu quotas, so a task may be able to run on all cores but have some kind of rate limit that is not reflected in the physical cores the system is allowed to run on.

In this talk, we propose a new library "libamicontained" to offer one place to consolidate the logic for the answer to this question. We propose a:

  • C-ABI-exporting
  • statically linked
  • zero dependency

library which is aware of all of these different runtime configurations and answers questions about cpu counts etc. in a reasonable way for each.

Of course, the real challenge here is adoption. Ideally we can present such a library as coming "from the container people", making it an easier pitch to language runtimes. We are here seeking feedback on all points (heuristics to reason about CPU counts, design goals, etc.) from container people as a first step.

[2]: as of this writing, the JVM still uses cpuset.cpus instead of cpuset.cpus.effective on cgroup v2.


Tycho Andersen
Sebastien Dabdoub