Hi,
Would it be possible to share the version of the code that trains with several GPUs in parallel?
I was able to train the NS network on a single GPU, but I have been running into several dimensionality issues when adapting the code to train sub-batches in parallel.
Any tips for adapting the code to train in parallel would also be welcome.
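For context, this is roughly the splitting pattern I am trying to get right — a minimal NumPy sketch, not the actual training code (the batch shape and the number of GPUs here are hypothetical; the real code computes the network losses on each sub-batch):

```python
import numpy as np

# Hypothetical batch of training points: (N, 3), e.g. (x, y, t) coordinates.
batch = np.random.rand(1000, 3)

n_gpus = 4
# Split along the batch axis only; the feature dimension must stay intact,
# otherwise each replica sees inputs with the wrong dimensionality.
sub_batches = np.array_split(batch, n_gpus, axis=0)

# Sanity checks: every sub-batch keeps the feature width,
# and the sub-batch sizes sum back to the full batch size.
assert all(sb.shape[1] == batch.shape[1] for sb in sub_batches)
assert sum(sb.shape[0] for sb in sub_batches) == batch.shape[0]
```

My dimensionality errors seem to come from splitting along the wrong axis (or reshaping the sub-batches before feeding them to each replica), so I would appreciate seeing how your multi-GPU version handles this.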
Regards!