Closed by stefanwebb 8 years ago
@stefanwebb The GPU cannot be shared. If you really want to do this, you may be able to construct several different `GPUBackend`s and run a different network on each backend. But I doubt you will get a performance improvement by running them "in parallel" unless you have multiple GPU cards.
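Roughly something like the following sketch (the `Net` construction and training are elided, and whether two live `GPUBackend` instances actually behave correctly is exactly what's untested here):

```julia
# Enable the CUDA backend before loading Mocha.
ENV["MOCHA_USE_CUDA"] = "true"
using Mocha

# One backend per network.
backend1 = GPUBackend()
backend2 = GPUBackend()
init(backend1)
init(backend2)

# ... build a Net on each backend and train them separately (elided) ...

shutdown(backend1)
shutdown(backend2)
```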
Thanks for the quick reply. We're actually testing an alternative learning algorithm that requires calculating the gradients for several different parameter sets in parallel.
I think there may be a bug that prevents constructing and using several different GPU backends: doing so crashes. I will post a simple example here shortly to illustrate.
Closing for now. Feel free to re-open if needed.
Hi, I have a model where we need to calculate the gradients of several neural networks in parallel. We do this using a number of `@async` blocks, and I have tried initializing the backend from the main process as well as initializing it in all processes.
This seems to work with `CPUBackend()`, but when I switch to `GPUBackend()` I get a lot of errors and the program crashes. Is multithreading supported for the GPU backend?
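For concreteness, the pattern looks roughly like this (`make_net` is a hypothetical stand-in for our actual network construction; only the task structure matters):

```julia
using Mocha

backend = CPUBackend()
init(backend)

# Hypothetical helper: builds one Net on `backend` for parameter set i.
nets = [make_net(backend, i) for i in 1:4]

# Launch one gradient computation per network and wait for all of them.
@sync for net in nets
    @async begin
        forward(net)   # forward pass computes the loss
        backward(net)  # backward pass computes the gradients
    end
end

shutdown(backend)
```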