pluskid / Mocha.jl

Deep Learning framework for Julia

Not able to run several models in parallel on the GPUBackend() #121

Closed stefanwebb closed 8 years ago

stefanwebb commented 9 years ago

Hi, I have a model where we need to calculate the gradients of several neural networks in parallel. We do this using a number of @async blocks, and I have tried initializing the backend from the main process as well as initializing it in every process.

This works with CPUBackend(), but when I switch to GPUBackend() I get a lot of errors and the program crashes. Is multithreading supported for the GPU backend?

[screenshot: Mocha GPUBackend error output]

pluskid commented 9 years ago

@stefanwebb A GPU cannot be shared. If you really want to do this, you may be able to construct several different GPUBackend instances and run each network on its own backend. But I doubt you will see a performance improvement from running them "in parallel" unless you have multiple GPU cards.
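If you do have multiple cards, one backend per device might look roughly like the sketch below. This is a hedged sketch, not a tested recipe: `build_net` stands in for whatever constructs your particular network on a given backend, and selecting the device via the MOCHA_GPU_DEVICE environment variable is an assumption you should verify against the Mocha documentation for your version.

```julia
using Mocha

# Create one GPUBackend per physical GPU card.
# ASSUMPTION: Mocha picks the CUDA device from MOCHA_GPU_DEVICE at init time;
# check the Mocha docs for your version before relying on this.
backends = GPUBackend[]
for dev in 0:1
    ENV["MOCHA_GPU_DEVICE"] = string(dev)
    backend = GPUBackend()
    init(backend)
    push!(backends, backend)
end

# build_net is a hypothetical helper that constructs your layers and
# returns a Net bound to the given backend.
nets = [build_net(b) for b in backends]

# Run the gradient computations concurrently, one task per network/device.
@sync for net in nets
    @async begin
        forward(net)
        backward(net)
    end
end

for b in backends
    shutdown(b)
end
```

Note that each Net stays pinned to the backend it was built on; sharing one backend between @async tasks is exactly the situation that crashes.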

stefanwebb commented 9 years ago

Thanks for the quick reply. We're actually testing an alternative learning algorithm, one that requires calculating gradients on several different parameter sets in parallel.

I think there may be a bug preventing you from constructing and using several different GPU backends, because the program crashes when I try. I will post a simple example here shortly to illustrate.

pluskid commented 8 years ago

Closing for now. Feel free to re-open if needed.