I am facing an issue with deploying an application on a K8S node equipped with two GPU cards. My goal is to utilize both GPUs effectively with the help of nvshare. The application uses a single Docker image and programmatically spawns two threads, each bound to a specific GPU via `cudaSetDevice`. We set `NVIDIA_VISIBLE_DEVICES=all` in the Docker environment so that both GPUs are visible inside the container.
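For context, the threading pattern looks roughly like this; it is a simplified sketch, and `gpu_worker` plus the placeholder allocation stand in for the real workload:

```cpp
#include <cuda_runtime.h>
#include <thread>
#include <cstdio>

// Simplified per-GPU worker: binds the calling thread to one device,
// then allocates and frees a small buffer as a stand-in for real work.
void gpu_worker(int device_id) {
    cudaError_t err = cudaSetDevice(device_id);   // bind this host thread to the given GPU
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaSetDevice(%d) failed: %s\n",
                     device_id, cudaGetErrorString(err));
        return;
    }
    void* buf = nullptr;
    cudaMalloc(&buf, 1 << 20);                    // placeholder workload
    cudaDeviceSynchronize();
    cudaFree(buf);
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);            // expected to report 2 with NVIDIA_VISIBLE_DEVICES=all
    std::printf("visible devices: %d\n", device_count);

    // One host thread per GPU, each pinned to its own device via cudaSetDevice.
    std::thread t0(gpu_worker, 0);
    std::thread t1(gpu_worker, 1);
    t0.join();
    t1.join();
    return 0;
}
```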
To my understanding, the current version does not support multi-GPU functionality. I have also attempted to address this by opening a pull request (PR), but it seems my approach may not be the correct solution.
Could someone please assist me with this matter?