beniz opened this issue 7 years ago
Are there performance optimisations for CPU-only use? On Intel or ARM, I see a spike on all 8 cores when predicting.
The Caffe and TF backends are optimized for CPU, with parallel operations using all cores. It is always best to use batches.
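For reference, here is a minimal sketch of a batched prediction call against the DeepDetect REST API using Python `requests`; the host, port, service name (`myserv`) and image paths are placeholders for your own setup:

```python
import requests

# DeepDetect server endpoint (default port 8080) -- adjust to your setup
DD_URL = "http://localhost:8080"

payload = {
    "service": "myserv",  # hypothetical service created beforehand via PUT /services/myserv
    "parameters": {
        "output": {"best": 3}  # return top-3 classes per input
    },
    # Passing several inputs in one call lets the backend batch them,
    # which is usually faster on CPU than one request per image.
    "data": ["/data/img1.jpg", "/data/img2.jpg", "/data/img3.jpg"]
}

r = requests.post(DD_URL + "/predict", json=payload)
r.raise_for_status()
print(r.json())
```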
I don't think sharing one GPU between multiple applications is generally a good idea. Given that TF takes control of the device's global memory allocation and of other resources that require exclusive access, it is better to share your GPU in some other way, for example time-sharing :P
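To illustrate why sharing is hard: by default a TF session maps (nearly) all device memory at startup. When you control the TF process yourself, the usual mitigations in the TF 1.x Python API (contemporary with this issue) look like the sketch below; DeepDetect embeds TF through its C++ backend, so take this purely as an illustration of the knobs involved, not as something DeepDetect exposes:

```python
import tensorflow as tf  # TF 1.x API

# Relax TF's default "grab all GPU memory up front" behaviour, which
# helps somewhat when co-locating other CUDA applications on the device.
gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.4,  # cap TF at ~40% of device memory
    allow_growth=True                     # allocate lazily instead of up front
)
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # ... build and run the graph as usual ...
    pass
```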
TensorFlow's run call conflicts with other libraries over the CUDA initialization context, see:
TensorFlow devs indicate that solving this is not on the roadmap.
The StreamExecutor context issue is confirmed with DeepDetect when using TensorFlow and Caffe services on the same GPU. More qualification is expected in the future. Current solution: build the TensorFlow backend with CPU-only support (i.e. with no GPU support built in).