Project-HAMi / HAMi-core

HAMi-core compiles libvgpu.so, which ensures hard limit on GPU in container

image usage #21

Open A-transformer opened 2 months ago

A-transformer commented 2 months ago

After building the Docker image, we copy the build package into the target image. Does this mean that only images that contain the build package will support the following features?

export CUDA_DEVICE_MEMORY_LIMIT=1g
export CUDA_DEVICE_SM_LIMIT=50

archlitchi commented 1 month ago

Actually, the output of the build is 'libvgpu.so'. You can mount that '.so' into any container with LD_PRELOAD set, and it will support settings like

export CUDA_DEVICE_MEMORY_LIMIT=1g
export CUDA_DEVICE_SM_LIMIT=50
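A minimal sketch of that usage with `docker run`. The host path and image name are hypothetical; adjust them to wherever your build placed libvgpu.so:

```shell
# Mount the built libvgpu.so into the container and preload it,
# then set the vGPU limits via environment variables.
# /path/to/libvgpu.so and my-cuda-image are placeholders.
docker run --gpus all \
  -v /path/to/libvgpu.so:/usr/local/vgpu/libvgpu.so:ro \
  -e LD_PRELOAD=/usr/local/vgpu/libvgpu.so \
  -e CUDA_DEVICE_MEMORY_LIMIT=1g \
  -e CUDA_DEVICE_SM_LIMIT=50 \
  my-cuda-image
```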

A-transformer commented 1 month ago

Yes, it's working in both container and VM environments. Is it possible to set CUDA_DEVICE_MEMORY_LIMIT and CUDA_DEVICE_SM_LIMIT dynamically? Once the process starts, the environment variables are fixed. Can these values be changed at runtime, for example reloaded from a ConfigMap or a configuration server?

archlitchi commented 2 weeks ago

> Yes, it's working on both container and VM environments. Is it possible to set CUDA_DEVICE_MEMORY_LIMIT and CUDA_DEVICE_SM_LIMIT dynamically? Once the process starts, the environment variables become fixed. Can these values be changed dynamically or reloaded from a ConfigMap or configuration server?

We haven't implemented that feature yet, but technically you can modify the values in vGPUmonitor: once the '.cache' file is mmapped, it will propagate the updated device memory limit and utilization limit to the corresponding container.