chloefresh · closed 2 years ago
I'd like to finetune SciBERT on multiple GPUs, not just one. But `export CUDA_DEVICE=0` in `_train_allennlplocal.sh` means only GPU 0 is used. Where do I need to make changes to use multiple GPUs?
Multi-GPU finetuning is already implemented in the code via `DataParallel` from torch. You could change `export CUDA_DEVICE=0` to `export CUDA_DEVICE=0,1,2,3`, but you would also need to change how the `cuda_device` argument is parsed in the config files.
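One way to handle this, sketched below under the assumption that the config reads `CUDA_DEVICE` via `std.extVar` (as AllenNLP Jsonnet configs typically do), is to split the comma-separated string into a list of ints and pass a list to the trainer when more than one device is given. The variable names here are illustrative, not taken from the repo:

```jsonnet
// Hypothetical sketch: parse a comma-separated CUDA_DEVICE env var
// (e.g. "0,1,2,3") into a list of ints for the AllenNLP trainer.
local cuda_device_str = std.extVar("CUDA_DEVICE");
local cuda_devices = [std.parseInt(d) for d in std.split(cuda_device_str, ",")];

{
  "trainer": {
    // Older AllenNLP versions accept either a single int or a list of ints;
    // a list enables DataParallel-style multi-GPU training.
    "cuda_device": if std.length(cuda_devices) > 1
                   then cuda_devices
                   else cuda_devices[0],
  },
}
```

With `export CUDA_DEVICE=0,1,2,3` this yields `"cuda_device": [0, 1, 2, 3]`, while a single `export CUDA_DEVICE=0` still yields the scalar `0`, so the single-GPU path keeps working unchanged.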