ronanki closed this issue 5 years ago
Hi @ronanki, I haven't updated the code to support multi-GPU training yet.
I will add multi-GPU support ASAP.
EDIT: I have updated the README with instructions on how to run in a multi-GPU configuration. Try the new instructions. Thanks.
Thank you very much.
It worked after making a small change in `distributed.py`, following this issue:

```python
param._execution_engine.queue_callback(allreduce_params)
```

replaced with

```python
Variable._execution_engine.queue_callback(allreduce_params)
```
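For context, here is a minimal sketch of how that callback registration typically looks inside `distributed.py`. The surrounding hook structure and the `state` guard are my assumptions based on common PyTorch all-reduce wrappers, not necessarily this repo's exact code:

```python
import torch.distributed as dist
from torch.autograd import Variable  # provides the class-level _execution_engine


def register_allreduce_hooks(module):
    """Queue a gradient all-reduce to run once the backward pass finishes."""
    state = {"queued": False}  # hypothetical guard: queue only once per backward

    def allreduce_params():
        state["queued"] = False  # reset for the next backward pass
        for param in module.parameters():
            if param.requires_grad and param.grad is not None:
                # Sum gradients across all processes, then average.
                dist.all_reduce(param.grad.data)
                param.grad.data /= dist.get_world_size()

    def queue_hook(*unused):
        if not state["queued"]:
            state["queued"] = True
            # The fix above: use the shared engine on Variable, not a
            # per-parameter attribute, which no longer exists.
            Variable._execution_engine.queue_callback(allreduce_params)

    for param in module.parameters():
        if param.requires_grad:
            # Hooks fire during backward, which is the only time
            # queue_callback may legally be called.
            param.register_hook(queue_hook)
```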
Do I need to provide any additional arguments to run train.py on multiple GPUs? I see there are some flags for distributed training, but I'm just focusing on running on multiple GPUs on a single host machine (the kind of launch pattern sketched below).
Currently, it's training on a single GPU even though the host has multiple GPUs.
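For illustration, this is the kind of single-host pattern I have in mind: one process per GPU via `torch.multiprocessing.spawn`. The `train()` body and the rendezvous address here are placeholders, not this repo's actual train.py arguments:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def train(rank, world_size):
    # One process per GPU; bind this process to its own device.
    dist.init_process_group(
        backend="nccl",
        init_method="tcp://127.0.0.1:23456",  # assumed local rendezvous address
        world_size=world_size,
        rank=rank,
    )
    torch.cuda.set_device(rank)
    # ... build the model and optimizer, run the training loop here ...
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    # Spawn one worker per visible GPU on this host.
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```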