Closed qianlin404 closed 5 years ago
@qianlin404 Can you please take a look at this issue and let me know if that answers your question. Thanks!
Hello, I tried to run Serving with MPS and referred to https://github.com/NVIDIA/nvidia-docker/issues/419. You should first install nvidia-docker2 and then follow the steps in that issue.
Hi @tomandjerrygit, thanks for the reference. It sounds like this is an IPC problem: MPS runs on the host, and the program running inside the Docker container cannot communicate with it. After setting --ipc=host, it works properly.
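For anyone hitting the same symptom, a sketch of where the flag goes in the launch command (the image tag is from this thread; the port, model path, and model name are placeholder assumptions, not the reporter's actual values):

```shell
# Sharing the host IPC namespace lets the containerized server reach the
# MPS daemon's shared-memory segments on the host.
# /path/to/my_model and my_model are placeholders.
docker run --runtime=nvidia --ipc=host -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  tensorflow/serving:latest-gpu
```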
System information
Describe the problem
I am experimenting with TensorFlow Serving GPU and the NVIDIA Multi-Process Server (MPS). I run TF Serving with the tensorflow/serving:latest-gpu Docker image. Everything works properly when MPS is disabled. However, when I enable MPS using sudo nvidia-cuda-mps-control -d and run TF Serving, I get the following error:
Exact Steps to Reproduce
I run my experiment on the AWS Deep Learning AMI (Ubuntu) Version 22.0 with instance type p3.2xlarge. GPU information is as follows:
Source code / logs
The code I use to run TF Serving is as follows:
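The exact command was not captured in this excerpt; a typical invocation for this image (model path, model name, and port are placeholder assumptions) might look like:

```shell
# Assumptions: a SavedModel exported under /path/to/my_model,
# and nvidia-docker2 installed as noted earlier in the thread.
docker run --runtime=nvidia -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  tensorflow/serving:latest-gpu
```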
The code I use to enable MPS is:
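The command quoted above (sudo nvidia-cuda-mps-control -d) is typically run as part of a short host-side sequence; a sketch, assuming the single GPU of a p3.2xlarge:

```shell
# Select the GPU the MPS daemon should manage (GPU 0 on a single-GPU instance).
export CUDA_VISIBLE_DEVICES=0
# Start the MPS control daemon in background mode.
sudo nvidia-cuda-mps-control -d
# To stop MPS later:
# echo quit | sudo nvidia-cuda-mps-control
```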