tobegit3hub / simple_tensorflow_serving

Generic and easy-to-use serving service for machine learning models
https://stfs.readthedocs.io
Apache License 2.0
757 stars 193 forks

There is serious trouble when I use multiple GPU devices!!! #31

Closed Johnson-yue closed 6 years ago

Johnson-yue commented 6 years ago

When I use your Docker image on a machine with only one GPU, everything is OK. But when I use it on my four-GPU machine, the GPU configuration options like "log_device_placement": true, "allow_soft_placement": true, "allow_growth": true, "per_process_gpu_memory_fraction": 0.5 do not work.

All the GPU memory of the four 1080 Ti cards has been allocated, so I cannot run anything else. This is not right!

Please check this with multiple GPUs, thank you.
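For reference, the options above gathered into one JSON fragment, in the shape TensorFlow's session `ConfigProto`/`GPUOptions` expect. The nesting under "gpu_options" and the way simple_tensorflow_serving consumes this file are assumptions; the project's actual config format may differ:

```json
{
  "log_device_placement": true,
  "allow_soft_placement": true,
  "gpu_options": {
    "allow_growth": true,
    "per_process_gpu_memory_fraction": 0.5
  }
}
```

Note that `per_process_gpu_memory_fraction` limits memory per visible GPU; if all four GPUs are visible to the process, each one can still have half of its memory grabbed.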

tobegit3hub commented 6 years ago

The problem is in how you expose GPUs to the Docker image.

If you run the script directly on the server, you can set CUDA_VISIBLE_DEVICES=0 to use only the first GPU.
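A minimal sketch of that approach; the serving command at the end is a placeholder, since its flags depend on your model setup:

```shell
# Make only the first GPU visible to the process; the other three
# GPUs are hidden from TensorFlow entirely, so their memory is untouched.
export CUDA_VISIBLE_DEVICES=0

# Verify the setting before launching the server.
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

# Then start the server as usual, e.g.:
# simple_tensorflow_serving --model_base_path="./model"
```

Use a comma-separated list (e.g. `CUDA_VISIBLE_DEVICES=0,2`) to expose a subset of GPUs, or an empty value to force CPU-only execution.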

If you run with the Docker image, you should not use export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}'), which mounts all GPU devices by default. Check which GPU you want to use, mount only that one, and replace the DEVICES environment variable accordingly.