Closed jaeehooon closed 2 years ago
Hi, thanks for your interest. You are right, the command can be used if you train with a single GPU. For example,
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 train.py
should work fine.
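For context, here is a minimal sketch of what train.py would need for that launch command to work. This is an illustration, not the repository's actual script: the --local_rank argument and the env:// initialization follow the torch.distributed.launch convention, and the NCCL backend is an assumption for GPU training.

import argparse
import torch
import torch.distributed as dist

def main():
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to every process it spawns;
    # with --nproc_per_node=1 there is exactly one process, so local_rank is 0.
    parser.add_argument("--local_rank", type=int, default=0)
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    # The launcher exports MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE,
    # which init_method="env://" reads. A world size of 1 is valid, so the
    # same code path covers both single-GPU and multi-GPU runs.
    dist.init_process_group(backend="nccl", init_method="env://")

    print(f"rank {dist.get_rank()} of {dist.get_world_size()} on cuda:{args.local_rank}")

if __name__ == "__main__":
    main()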
Hi, is this setting in engine.py correct?
Hello. I appreciate you making this code available!
I would like to ask a question about training with a single GPU. Is the following command only for training with a single GPU?
python train.py -d 0
In the command you proposed, I don't know how to train with a single GPU.
$ CUDA_VISIBLE_DEVICES="GPU IDs" python -m torch.distributed.launch --nproc_per_node="GPU numbers you want to use" train.py
Is this one correct?
$ CUDA_VISIBLE_DEVICES="0" python -m torch.distributed.launch --nproc_per_node="1" train.py
Even CUDA_VISIBLE_DEVICES is not working on Ubuntu! Thank you. Best regards.
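A quick way to check whether CUDA_VISIBLE_DEVICES is being respected is a small standalone script (a generic sketch, independent of this repository; check_gpu.py is a hypothetical file name), run in a fresh process with the variable set, e.g. CUDA_VISIBLE_DEVICES=0 python check_gpu.py:

import os
import torch

# CUDA_VISIBLE_DEVICES must be set before CUDA is initialized, so run this
# in a fresh process rather than an interpreter that has already touched the GPU.
print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("visible GPU count    =", torch.cuda.device_count())

If the count still shows all GPUs, the variable is likely not reaching the process, for example because it was set in a separate shell line without export.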