Closed: gauravjain14 closed this issue 4 years ago

I'm not sure if I'm missing something, but after a quick grep and a check of my GPU usage, it looks like the PyTorch port doesn't have any flag that lets us train the models on GPUs. Is this feature pending, or is MPI the only supported parallelization for the PyTorch port?
Sorry, training on GPUs is not currently planned. I'd be surprised if you got a substantial speedup with GPUs for the typical networks trained with the Spinning Up implementations, though (MLPs of size ~(256, 256)).
Okay. I was asking because I wanted to benchmark exactly the case you mentioned.
Thanks
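For anyone who wants to check the claim above, here is a minimal benchmarking sketch (not part of the Spinning Up codebase) that times training steps of a (256, 256) MLP on CPU versus GPU. The observation/action dimensions, batch size, and iteration count are illustrative assumptions chosen to resemble a typical Spinning Up workload, not values taken from the repo.

```python
# Minimal sketch: time forward/backward/optimizer steps of a (256, 256) MLP
# on CPU vs. CUDA. All dimensions and counts below are placeholder assumptions.
import time

import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 17, 6  # placeholder sizes, roughly MuJoCo-scale


def make_mlp(hidden=(256, 256)):
    # Build an MLP of the size mentioned in the thread.
    layers, in_dim = [], OBS_DIM
    for h in hidden:
        layers += [nn.Linear(in_dim, h), nn.Tanh()]
        in_dim = h
    layers.append(nn.Linear(in_dim, ACT_DIM))
    return nn.Sequential(*layers)


def benchmark(device, batch_size=64, iters=1000):
    model = make_mlp().to(device)
    x = torch.randn(batch_size, OBS_DIM, device=device)
    target = torch.randn(batch_size, ACT_DIM, device=device)
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters())
    # Warm-up so one-time CUDA initialization overhead isn't measured.
    for _ in range(10):
        opt.zero_grad()
        loss_fn(model(x), target).backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # CUDA ops are async; flush before timing
    start = time.perf_counter()
    for _ in range(iters):
        opt.zero_grad()
        loss_fn(model(x), target).backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"cpu:  {benchmark(torch.device('cpu')):.3f}s")
    if torch.cuda.is_available():
        print(f"cuda: {benchmark(torch.device('cuda')):.3f}s")
```

With small batches like these, kernel launch and host-device transfer overhead tends to dominate, which is the intuition behind the reply above; larger batch sizes or wider networks would shift the comparison in the GPU's favor.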