jolibrain / deepdetect

Deep Learning API and Server in C++14 — support for PyTorch, TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE
https://www.deepdetect.com/

Unsafe use of Tensorflow and Caffe on same DeepDetect server / GPU #230

Open beniz opened 7 years ago

beniz commented 7 years ago

Tensorflow's runtime conflicts with the CUDA initialization context of other libraries, see:

Tensorflow devs indicate that resolving this is not on the roadmap.

The StreamExecutor context issue is confirmed in DeepDetect when running Tensorflow and Caffe services on the same GPU. More qualification is expected in the future.

Current workaround: build the Tensorflow backend with CPU support only (i.e. without GPU support built in).
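With a CPU-only Tensorflow build, a service can then be created as usual through DeepDetect's JSON API. A minimal sketch of such a service-creation payload is below — the field values (`nclasses`, the model repository path, the service name) are hypothetical placeholders, and the exact parameter set should be checked against the DeepDetect API docs:

```python
import json

# Hypothetical sketch of a DeepDetect service-creation body for a
# CPU-only Tensorflow image classifier. Paths and nclasses are made up;
# "gpu": False asks the backend not to create a CUDA context.
payload = {
    "mllib": "tensorflow",
    "description": "image classifier, CPU only",
    "type": "supervised",
    "parameters": {
        "input": {"connector": "image"},
        "mllib": {"nclasses": 1000, "gpu": False},
    },
    "model": {"repository": "/opt/models/inception"},
}

body = json.dumps(payload)
# e.g. requests.put("http://localhost:8080/services/tfcpu", data=body)
print(body)
```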

abhiguru commented 7 years ago

Are there performance optimisations for CPU-only use? On Intel or ARM I see a spike on all 8 cores when predicting.

beniz commented 7 years ago

The Caffe and TF backends are optimized for CPU, with parallel operations using all cores. It is always best to predict in batches.
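Batching in practice just means passing several inputs in the `data` array of one `/predict` call, so the backend can process them together instead of paying the per-call overhead each time. A sketch, with hypothetical service name and image paths:

```python
import json

# Sketch of a batched DeepDetect /predict body: three images submitted in
# a single call ("tfcpu" and the paths are hypothetical placeholders).
predict = {
    "service": "tfcpu",
    "parameters": {"output": {"best": 3}},
    "data": ["/data/img1.jpg", "/data/img2.jpg", "/data/img3.jpg"],
}

# e.g. requests.post("http://localhost:8080/predict", data=json.dumps(predict))
print(json.dumps(predict))
```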

byronyi commented 7 years ago

I don't think sharing one GPU between multiple applications is generally a good idea. Given that TF takes control of the device's global memory allocation, and of all other resources that require exclusive access, it is better to share your GPU in some other way, for example time-sharing :P
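Short of time-sharing, one common way to keep the two backends from fighting over the same device is isolation via `CUDA_VISIBLE_DEVICES`: each server process only sees the devices you expose to it, so each CUDA context lands on a separate (or no) GPU. A sketch, assuming two DeepDetect server processes with hypothetical binary path and ports:

```python
import os

# Sketch: device isolation with CUDA_VISIBLE_DEVICES. The Caffe server is
# shown GPU 0 only; the Tensorflow server is shown no GPU at all, so it
# falls back to CPU and never touches the CUDA context Caffe is using.
env_caffe = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
env_tf = dict(os.environ, CUDA_VISIBLE_DEVICES="")

# Hypothetical launch commands (binary path and ports are placeholders):
# subprocess.Popen(["./dede", "-port", "8080"], env=env_caffe)
# subprocess.Popen(["./dede", "-port", "8081"], env=env_tf)
print(env_caffe["CUDA_VISIBLE_DEVICES"], repr(env_tf["CUDA_VISIBLE_DEVICES"]))
```

This only isolates devices; on a single GPU it degenerates back to the CPU-only workaround above.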