silversparro / wav2letter.pytorch

A fully convolutional network for speech-to-text, built on PyTorch.
MIT License

Train on multiple GPUs on a single host EC2 instance #9

Closed ronanki closed 5 years ago

ronanki commented 5 years ago

Do I need to provide any additional arguments to run train.py on multiple GPUs? I see there are some flags for distributed training, but I'm just focusing on running on multiple GPUs on a single host machine.

Currently, it's training on a single GPU even though the host has multiple GPUs.
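
For reference, this is the kind of single-host pattern I was expecting — a rough sketch with placeholder layers and tensor shapes, not the actual wav2letter.pytorch model:

```python
# Rough sketch of single-host multi-GPU training with nn.DataParallel.
# The layers and shapes below are placeholders, not the repo's model.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the wav2letter network
    nn.Conv1d(161, 250, kernel_size=48, stride=2),
    nn.ReLU(),
    nn.Conv1d(250, 29, kernel_size=1),       # 29 = characters + blank, illustrative
)

if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU and splits each batch across them.
    model = nn.DataParallel(model)
model = model.cuda()

batch = torch.randn(8, 161, 500).cuda()      # (batch, freq_bins, time), illustrative
out = model(batch)                           # outputs gathered on the default GPU
```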

vaibhav0195 commented 5 years ago

Hi @ronanki, I haven't updated the code to support multi-GPU training yet.

I will add the multi GPU support ASAP.

EDIT: I have updated the README with instructions on how to run in a multi-GPU configuration. Try the new instructions. Thanks.
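
Roughly, the idea is one training process per GPU. A generic sketch of that pattern follows (only an illustration; the script names and init parameters here are placeholders — follow the README for the actual commands):

```python
# Generic single-host, multi-process pattern: one process per GPU.
# Illustration only; the address/port and flow are assumptions, not the repo's setup.
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, world_size):
    # Each process binds to its own GPU and joins the process group.
    torch.cuda.set_device(rank)
    dist.init_process_group(backend='nccl',
                            init_method='tcp://127.0.0.1:23456',
                            world_size=world_size,
                            rank=rank)
    # ... build the model, register the gradient all-reduce hooks, train ...

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(run, args=(world_size,), nprocs=world_size)
```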

ronanki commented 5 years ago

Thank you very much.

It worked after making a small change in distributed.py following this issue:

param._execution_engine.queue_callback(allreduce_params)

replaced with

Variable._execution_engine.queue_callback(allreduce_params)
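
For anyone hitting the same AttributeError, the hook registration around that line looks roughly like this (a sketch in the deepspeech.pytorch style, not the exact distributed.py). The engine handle is an attribute of the Variable class, which is why the callback has to be queued via Variable rather than via the parameter instance:

```python
# Sketch of how the fixed line is typically wired up (not the exact distributed.py).
# Assumes torch.distributed is already initialized; the model is a stand-in.
import torch
import torch.distributed as dist
from torch.autograd import Variable

model = torch.nn.Linear(161, 29).cuda()      # stand-in for the wav2letter network

def allreduce_params():
    # Average gradients across all processes once backward() has finished.
    for p in model.parameters():
        if p.requires_grad and p.grad is not None:
            dist.all_reduce(p.grad.data)
            p.grad.data /= dist.get_world_size()

for param in model.parameters():
    if param.requires_grad:
        def allreduce_hook(*unused):
            # `_execution_engine` lives on the Variable class, so the callback
            # is queued through Variable, not through `param`.
            Variable._execution_engine.queue_callback(allreduce_params)
        param.register_hook(allreduce_hook)
```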