ChihebTrabelsi / deep_complex_networks

Implementation related to the Deep Complex Networks paper
MIT License

Multiple CPUs? #13

Open thanhkien84 opened 6 years ago

thanhkien84 commented 6 years ago

Is there any way to run the code on multiple CPUs or GPUs?

obilaniu commented 6 years ago

The code will run on multiple CPUs provided you've linked it against a multithreaded BLAS build such as MKL or OpenBLAS and haven't disabled multithreading (for example, via certain OpenMP environment variables).
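To illustrate (this snippet is not part of the repo), the BLAS thread count is controlled by environment variables that must be set before NumPy/Theano are imported; the value of 8 below is an arbitrary example:

```python
# Sketch: enable/cap BLAS multithreading via environment variables.
# These must be set before numpy/theano are imported.
import os

os.environ.setdefault("OMP_NUM_THREADS", "8")       # OpenMP-backed BLAS (MKL, OpenBLAS)
os.environ.setdefault("MKL_NUM_THREADS", "8")       # Intel MKL
os.environ.setdefault("OPENBLAS_NUM_THREADS", "8")  # OpenBLAS

import time
import numpy as np

# A large matrix product exercises the BLAS; its wall time should shrink as
# the thread counts above are raised, provided the BLAS build is multithreaded.
a = np.random.rand(3000, 3000).astype("float32")
b = np.random.rand(3000, 3000).astype("float32")
t0 = time.time()
a.dot(b)
print("3000x3000 matmul: %.2f s" % (time.time() - t0))
```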

There is no support at present for multiple GPUs, and we do not foresee adding it to the current Keras+Theano code because Theano itself is being sunset.

A future rewrite of this codebase in another framework may support multiple GPUs.

austinmw commented 6 years ago

I'm trying out the demo and started training. It took about 4 hours to complete 1 epoch. Can I stop/restart training and test at any time? By default it's set to run for 200 epochs, which on my system would be ~1 month straight. Did you find that this much training is needed for good performance on Musicnet?

obilaniu commented 6 years ago

@austinmw

A runtime on the order of 4 hours is consistent with running on CPU. With gpuarray 0.7.5, a P100 GPU and THEANO_FLAGS="mode=FAST_RUN,device=cuda,floatX=float32,gpuarray.preallocate=1", Musicnet epochs take on the order of 100-150 seconds each.
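For completeness, a sketch (not from the repo): the same flags can also be set from Python, as long as that happens before Theano is first imported:

```python
# Sketch: THEANO_FLAGS must be in the environment before Theano is imported.
import os

os.environ["THEANO_FLAGS"] = (
    "mode=FAST_RUN,device=cuda,floatX=float32,gpuarray.preallocate=1"
)

import theano
print(theano.config.device)  # expect 'cuda' once the gpuarray backend initializes
```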

I did not code the Musicnet aspect, so I can't vouch for its resumability.
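That said, a generic Keras pattern (not the authors' Musicnet code) for making a long run stoppable and resumable is to checkpoint the model every epoch and reload it before continuing; the file name and model below are placeholders:

```python
# Generic Keras stop/resume sketch; the model and checkpoint name are placeholders.
import os
from keras.models import Sequential, load_model
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

CKPT = "musicnet_checkpoint.h5"  # hypothetical checkpoint file

def build_model():
    # Stand-in for the real network definition.
    model = Sequential([Dense(10, activation="softmax", input_shape=(100,))])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model

# Resume from the last checkpoint if one exists, otherwise start from scratch.
model = load_model(CKPT) if os.path.exists(CKPT) else build_model()

# ModelCheckpoint writes the full model (weights + optimizer state) after each
# epoch, so an interrupted run can restart from the last completed epoch, e.g.:
#   model.fit(x, y, epochs=200, initial_epoch=epochs_done,
#             callbacks=[ModelCheckpoint(CKPT)])
```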

austinmw commented 6 years ago

Thanks for the suggestion. Do you use the latest Theano/libgpuarray/pygpu releases?

obilaniu commented 6 years ago

@austinmw Approximately the setup for Musicnet:

- Theano 1.0.1
- libgpuarray/pygpu 0.7.5 (they're from the same repo)
- cuDNN 6.0.21
- CUDA Toolkit 8.0 (I believe)

Those are the newest Theano/libgpuarray releases, paired with somewhat older cuDNN/CUDA libraries. It still works very well.

austinmw commented 6 years ago

Thanks, I had to edit .theanorc with the CUDA path, but got it working with CUDA 9.0!
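For anyone else hitting this, a .theanorc along these lines is one way to point Theano at a specific CUDA install (the path is illustrative and machine-specific):

```ini
# Illustrative .theanorc; adjust the CUDA root to the local install.
[global]
device = cuda
floatX = float32

[cuda]
root = /usr/local/cuda-9.0
```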