tobegit3hub / simple_tensorflow_serving

Generic and easy-to-use serving service for machine learning models
https://stfs.readthedocs.io
Apache License 2.0
757 stars 193 forks

a bug for model_config_file #32

Open Johnson-yue opened 6 years ago

Johnson-yue commented 6 years ago

Hi, I found out how to mount a specific GPU device. A simple way to run your Docker image on gpu:0 and gpu:1 is: docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES='0,1' --rm -it -p 8500:8500 tobegit3hub/simple_tensorflow_serving:latest-gpu. [THE BUG IS HERE]: When I run your Docker image on a machine with only one GPU device, it works well. I can control GPU memory usage with a JSON file such as /models/example/tensorflow_gpu_config.json, and every flag, e.g. "per_process_gpu_memory_fraction": 0.5, takes effect.

But when I run the same Docker image, in the same way and with the same tensorflow_gpu_config.json file, on a machine with four GPU devices, it does not work, even though I mount only one GPU device. I set the flag "per_process_gpu_memory_fraction": 0.5, but TensorFlow still uses the full GPU memory!!
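For context, per_process_gpu_memory_fraction is a field under gpu_options in TensorFlow's session ConfigProto. A minimal sketch of loading and sanity-checking such a JSON file (the nested dict layout here is an assumption for illustration, not necessarily the exact schema simple_tensorflow_serving reads):

```python
import json

# Hypothetical session-config JSON mirroring TensorFlow ConfigProto
# fields; the exact schema read by simple_tensorflow_serving may differ.
CONFIG_TEXT = """
{
  "allow_soft_placement": true,
  "gpu_options": {
    "allow_growth": true,
    "per_process_gpu_memory_fraction": 0.5
  }
}
"""

def load_gpu_memory_fraction(text):
    """Parse the config and return the GPU memory fraction, defaulting
    to 1.0 (use all GPU memory) when the flag is absent."""
    config = json.loads(text)
    fraction = config.get("gpu_options", {}).get(
        "per_process_gpu_memory_fraction", 1.0)
    if not 0.0 < fraction <= 1.0:
        raise ValueError("fraction must be in (0, 1]: %r" % fraction)
    return fraction

print(load_gpu_memory_fraction(CONFIG_TEXT))  # → 0.5
```

Note the default of 1.0 when the flag is missing: if the server fails to find or parse the file, TensorFlow will grab all GPU memory, which matches the symptom described above.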

Did you test your Docker image on a machine with multiple GPU devices?

tobegit3hub commented 5 years ago

Thanks for reporting.

How do you "mount only one gpu devices"? If you run the Docker container with -e NVIDIA_VISIBLE_DEVICES='0', I think the container can only use one GPU device.
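NVIDIA_VISIBLE_DEVICES is a comma-separated list of device indices (or the keywords all/none/void), and the NVIDIA container runtime hides every host GPU not listed. A tiny illustrative sketch of that filtering logic (this is not the actual nvidia-container-runtime code, just the semantics):

```python
def visible_gpus(all_gpus, env):
    """Return the subset of host GPUs a container may see, mimicking
    how NVIDIA_VISIBLE_DEVICES is interpreted (illustrative sketch)."""
    value = env.get("NVIDIA_VISIBLE_DEVICES", "all")
    if value == "all":
        return list(all_gpus)
    if value in ("none", "void", ""):
        return []
    wanted = {int(i) for i in value.split(",")}
    return [gpu for i, gpu in enumerate(all_gpus) if i in wanted]

host_gpus = ["gpu:0", "gpu:1", "gpu:2", "gpu:3"]
print(visible_gpus(host_gpus, {"NVIDIA_VISIBLE_DEVICES": "0"}))  # → ['gpu:0']
```

So on the four-GPU host, '0' should indeed leave only one device visible inside the container; the memory-fraction flag is a separate, TensorFlow-level setting.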

Johnson-yue commented 5 years ago

@tobegit3hub Yes, I'm using -e NVIDIA_VISIBLE_DEVICES='0' in Docker, so the container can only use one GPU device, and that works. My problem is that even when the container uses just one GPU device, your config file does not work: TensorFlow still uses the GPU memory fully!! Did you check this?