Open A-transformer opened 2 months ago
Actually, the build output is 'libvgpu.so'. You can mount that '.so' into any container with LD_PRELOAD set, and it will support settings like:
export CUDA_DEVICE_MEMORY_LIMIT=1g
export CUDA_DEVICE_SM_LIMIT=50
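A hypothetical sketch of how a preloaded hook could pick up these limits from the environment (the variable names are from the thread above; the parsing logic and unit handling are my assumptions, not the project's actual implementation):

```python
import os

# Assumed unit suffixes for CUDA_DEVICE_MEMORY_LIMIT values like '1g' or '512m'.
UNITS = {"k": 1 << 10, "m": 1 << 20, "g": 1 << 30}

def parse_mem_limit(value: str) -> int:
    """Parse a value such as '1g' or '512m' into bytes (hypothetical format)."""
    value = value.strip().lower()
    if value and value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)

# Simulate the environment the container would be started with.
os.environ.setdefault("CUDA_DEVICE_MEMORY_LIMIT", "1g")
os.environ.setdefault("CUDA_DEVICE_SM_LIMIT", "50")

mem_bytes = parse_mem_limit(os.environ["CUDA_DEVICE_MEMORY_LIMIT"])
sm_percent = int(os.environ["CUDA_DEVICE_SM_LIMIT"])
print(mem_bytes, sm_percent)  # 1073741824 50
```

Because the hook reads these variables at process start, they have to be present in the container's environment (e.g. passed with `docker run -e`) before the CUDA program launches.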
Yes, it's working in both container and VM environments. Is it possible to set CUDA_DEVICE_MEMORY_LIMIT and CUDA_DEVICE_SM_LIMIT dynamically? Once the process starts, the environment variables become fixed. Can these values be changed dynamically or reloaded from a ConfigMap or configuration server?
We haven't implemented that feature yet, but technically you can modify the values in vGPUmonitor: once the '.cache' file is mmapped, it will push the updated device memory limit and utilization limit to the corresponding container.
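A minimal sketch of the mmap-based update idea described above. The file layout here is hypothetical (the real '.cache' format is defined by vGPUmonitor and libvgpu.so); the point is that a process which has mmapped the shared file sees in-place updates without restarting or re-reading it:

```python
import mmap
import os
import struct
import tempfile

# Hypothetical record layout: memory limit in bytes, SM limit in percent.
FMT = "<QQ"

path = os.path.join(tempfile.mkdtemp(), "limits.cache")
with open(path, "wb") as f:
    f.write(struct.pack(FMT, 1 << 30, 50))  # initial limits: 1 GiB, 50%

# "Container side": mmap the cache file once, as the preloaded library would.
fd = os.open(path, os.O_RDWR)
view = mmap.mmap(fd, struct.calcsize(FMT))
before = struct.unpack_from(FMT, view)

# "Monitor side": rewrite the limits in place from another file handle.
with open(path, "r+b") as f:
    f.write(struct.pack(FMT, 2 << 30, 75))  # new limits: 2 GiB, 75%

# The existing shared mapping observes the change immediately.
after = struct.unpack_from(FMT, view)
print(before, after)

view.close()
os.close(fd)
```

This works because a MAP_SHARED mapping and ordinary writes to the same file go through the same page cache on Linux, so the monitor can change limits while the container's process keeps running.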
After building the Docker image, we copy the build package into the target image. Does this mean that only images containing our build package will support features like export CUDA_DEVICE_MEMORY_LIMIT=1g and export CUDA_DEVICE_SM_LIMIT=50?