deephealthproject / eddl

European Distributed Deep Learning (EDDL) library. A general-purpose library initially developed to cover deep learning needs in healthcare use cases within the DeepHealth project.
https://deephealthproject.github.io/eddl/
MIT License

eddl does not respect number of threads in CS_CPU #148

Closed. Andrea-Oliveri closed this issue 4 years ago.

Andrea-Oliveri commented 4 years ago

Good morning. As stated in the title, eddl does not respect (anymore?) the number of threads passed to CS_CPU. More precisely, when using CS_CPU(1, "full_mem") with both eddl and pyeddl, I have observed with top that the process uses the CPU very inconsistently, jumping anywhere between 20 and 30 threads. The issue can be reproduced by taking use_case_pipeline (https://github.com/deephealthproject/use_case_pipeline/tree/master/src), compiling it for CPU, modifying src/mnist_batch so that it uses CS_CPU(1, "full_mem"), and running the training. I have also seen this behaviour when using pyeddl to train a custom network.
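
For reference, here is a minimal sketch of the kind of build call I mean, adapted from the public eddl MNIST examples; the exact layer, loss, and metric names may differ slightly in your version, and the model itself is just a placeholder:

```cpp
#include <eddl/apis/eddl.h>
using namespace eddl;

int main() {
    // Tiny MLP, only to show where CS_CPU(1, "full_mem") is passed.
    layer in  = Input({784});
    layer l   = ReLu(Dense(in, 128));
    layer out = Softmax(Dense(l, 10));
    model net = Model({in}, {out});

    // Request a single CPU thread; in practice top shows 20-30 threads in use.
    build(net,
          sgd(0.01f, 0.9f),
          {"softmax_cross_entropy"},
          {"categorical_accuracy"},
          CS_CPU(1, "full_mem"));

    summary(net);
    return 0;
}
```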

RParedesPalacios commented 4 years ago

Yes, right now, despite setting the number of threads for Eigen and OMP, the number of threads used always seems to be the maximum available. To fix this we would probably have to add the thread limit to every OMP pragma and hope that Eigen respects the num_threads value that is set.
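
Something along these lines, as a sketch only; set_cpu_threads and vector_add below are illustrative helpers, not actual eddl code:

```cpp
#include <omp.h>
#include <Eigen/Core>

// Illustrative only: cap the global thread pools once, e.g. when the
// compserv is created.
void set_cpu_threads(int num_threads) {
    omp_set_num_threads(num_threads);  // global OpenMP default
    Eigen::setNbThreads(num_threads);  // Eigen's internal parallelism
}

// Additionally pass the limit to each parallel region explicitly, so the cap
// holds even if another library resets the global default.
void vector_add(float *c, const float *a, const float *b, int size, int num_threads) {
    #pragma omp parallel for num_threads(num_threads)
    for (int i = 0; i < size; ++i) {
        c[i] = a[i] + b[i];
    }
}
```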

We will leave that for the future.