What do you mean by "it only uses one gpu"?
They will not merge into one execution, but they should generate at the same time.
You may try the script posted in this issue: https://github.com/facebookresearch/fairseq/issues/4478
The CUDA_VISIBLE_DEVICES=1 should be on the same line as the fairseq-generate command, otherwise it's not correctly passed to the subprocess.
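For illustration, a minimal sketch of the difference, reusing the paths from the script in the question (the broken variant is an assumed layout of the original script, not a quote of it):

# Broken: an assignment on its own line only sets a shell variable for that
# (empty) command, so fairseq-generate on the next line does not inherit it:
#   CUDA_VISIBLE_DEVICES=1
#   nohup fairseq-generate data-bin/wmt19_en_de_random_swap ... &>/dev/null &

# Works: the assignment prefixes the command, so it is placed in the
# environment of that one subprocess:
CUDA_VISIBLE_DEVICES=1 nohup fairseq-generate data-bin/wmt19_en_de_random_swap \
    --path checkpoints/fconv_wmt_en_de_2/checkpoint_best1.pt \
    --source-lang en --target-lang de \
    --gen-subset train --beam 5 --remove-bpe &>/dev/null &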
@gmryu I mean the two separate fairseq-generate runs ended up on one GPU rather than two. #4478 is useful, thanks!
@gwenzek it works! thanks a lot
❓ Questions and Help
What is your question?
How can I run two fairseq-generate jobs on two GPUs separately?
Code
CUDA_VISIBLE_DEVICES=0 nohup fairseq-generate data-bin/wmt19_en_de_random_del \
    --path checkpoints/fconv_wmt_en_de_2/checkpoint_best1.pt \
    --num-workers 1 \
    --scoring bert_score \
    --source-lang en --target-lang de \
    --results-path wmt19_en_de/data/trans_result/random_del/ \
    --gen-subset train \
    --beam 5 --remove-bpe &>/dev/null &

CUDA_VISIBLE_DEVICES=1 nohup fairseq-generate data-bin/wmt19_en_de_random_swap \
    --path checkpoints/fconv_wmt_en_de_2/checkpoint_best1.pt \
    --num-workers 1 \
    --scoring bert_score \
    --source-lang en --target-lang de \
    --results-path wmt19_en_de/data/trans_result/random_swap/ \
    --gen-subset train \
    --beam 5 --remove-bpe &>/dev/null &
What have you tried?
I tried to use two GPUs with this script, but both runs ended up on only one GPU.
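One way to confirm which GPU each run actually lands on, assuming nvidia-smi is available on the machine:

# Show running compute processes and which GPU each one is bound to
nvidia-smi
# List the two generate jobs with their PIDs to match against nvidia-smi's output
pgrep -af fairseq-generate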
What's your environment?
How you installed fairseq (pip, source): source