sylabs / singularity

SingularityCE is the Community Edition of Singularity, an open source container platform designed to be simple, fast, and secure.
https://sylabs.io/docs/

Consider adding CDI --device support to the singularity native runtime #1395

Open dtrudg opened 1 year ago

dtrudg commented 1 year ago

Is your feature request related to a problem? Please describe.

SingularityCE doesn't currently support the new CDI standard for making hardware devices available in containers.

https://github.com/container-orchestrated-devices/container-device-interface
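For anyone unfamiliar with CDI: a vendor (or site admin) drops a JSON/YAML spec file into a well-known location (e.g. /etc/cdi or /var/run/cdi) that names devices and describes the container edits (device nodes, mounts, environment variables, hooks) needed to expose them. A rough, abbreviated sketch of such a spec is below; the version, kind, device name, and paths are illustrative only.

```json
{
  "cdiVersion": "0.5.0",
  "kind": "nvidia.com/gpu",
  "devices": [
    {
      "name": "0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/nvidia0" }
        ]
      }
    }
  ],
  "containerEdits": {
    "deviceNodes": [
      { "path": "/dev/nvidiactl" },
      { "path": "/dev/nvidia-uvm" }
    ],
    "mounts": [
      {
        "hostPath": "/usr/lib/x86_64-linux-gnu/libcuda.so.1",
        "containerPath": "/usr/lib/x86_64-linux-gnu/libcuda.so.1",
        "options": ["ro", "nosuid", "nodev", "bind"]
      }
    ]
  }
}
```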

The native singularity runtime can currently:

- Bind NVIDIA libraries and devices into a container using the naive --nv binding approach.
- Bind ROCm libraries and devices into a container using the naive --rocm binding approach.
- Perform NVIDIA container setup via the vendor nvidia-container-cli tool, using the --nvccli flag.

The --nv and --rocm naive binding approach cannot support a range of valuable functionality, such as masking specific GPUs, exposing only a subset of device functionality inside a container, etc.

The --nvccli approach places trust in the vendor nvidia-container-cli tool. In addition, NVIDIA are moving to CDI as the preferred method for container setup, so continuing to rely on nvidia-container-cli for direct, non-CDI container setup may result in a lack of support for future GPU features.

The existing mechanisms are vendor-specific, but we'd like to support e.g. Intel GPUs (https://github.com/sylabs/singularity/issues/1094) without having to add more vendor-specific code / flags.
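For illustration only, a minimal Go sketch of what a vendor-neutral path could look like using the reference CDI library linked above: the runtime resolves fully-qualified device names against discovered spec files and applies the resulting edits to the OCI runtime spec, with no vendor-specific code. This is not a proposed implementation; the device name and error handling are purely illustrative.

```go
package main

import (
	"log"

	"github.com/container-orchestrated-devices/container-device-interface/pkg/cdi"
	oci "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// The OCI runtime spec being built for the container.
	spec := &oci.Spec{}

	// The registry discovers CDI spec files from the standard locations
	// (/etc/cdi, /var/run/cdi) written by vendor tooling or an admin.
	registry := cdi.GetRegistry()
	if err := registry.Refresh(); err != nil {
		log.Fatalf("refreshing CDI registry: %v", err)
	}

	// Apply the container edits for a fully-qualified CDI device name.
	// "nvidia.com/gpu=0" is illustrative only.
	unresolved, err := registry.InjectDevices(spec, "nvidia.com/gpu=0")
	if err != nil {
		log.Fatalf("injecting CDI devices (unresolved: %v): %v", unresolved, err)
	}

	log.Println("CDI container edits applied to OCI spec")
}
```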

Describe the solution you'd like

Support for CDI in the native singularity runtime, i.e. a --device flag that accepts fully-qualified CDI device names, mirroring the CDI support planned for the --oci mode (#1394).
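Hypothetically, usage might then look like the following; the flag name and the fully-qualified CDI device name (vendor.com/class=name) are illustrative, not a committed interface.

```sh
# Hypothetical: request a CDI-described device by its fully-qualified name
singularity run --device nvidia.com/gpu=0 container.sif
```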

Additional context

We are committed to adding CDI support to the --oci runtime mode - #1394

As the main focus for SingularityCE 4.0 development is the --oci mode, we might wish to avoid large changes to the native runtime in this cycle, unless there is compelling support for them from users. We need to gauge interest in CDI: are users who are interested in CDI also likely to move to the 4.0 --oci mode, or do they want CDI support in the native singularity runtime mode?

Some users are reluctant to install additional tooling, and to manage system configuration, for GPUs on their systems. This is particularly the case for clusters with heterogeneous GPU hardware across nodes. While singularity's --nv and --rocm naive binding is simple and doesn't offer features such as GPU masking, it requires no node-specific configuration beyond driver installation. We should be careful not to break this if we switch to a CDI approach for --nv / --rocm.

ArangoGutierrez commented 1 year ago

++