jiasenlu / vilbert_beta


Evaluation on multi-gpus #31

Open arjunakula opened 5 years ago

arjunakula commented 5 years ago

When evaluating on multiple GPUs, we need to explicitly add "-m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --node_rank=0 " to the command. Otherwise, the evaluation takes a lot of time (the evaluation script does not handle multiple GPUs on its own).

Example: Instead of running as "python eval_tasks.py ....", run as "python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --node_rank=0 eval_tasks.py"
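To make the workaround above concrete, here is a small stdlib-only sketch that assembles the suggested launcher command line. The helper function and its defaults are hypothetical (not part of the vilbert_beta repo); only the `torch.distributed.launch` flags and the `eval_tasks.py` script name come from the report above.

```python
import shlex


def build_eval_cmd(script="eval_tasks.py", nproc_per_node=1, nnodes=1,
                   node_rank=0, extra_args=()):
    """Assemble the torch.distributed.launch invocation suggested in this issue.

    This is a hypothetical convenience helper: it only builds the command
    string, so any task-specific flags for eval_tasks.py must be passed
    via extra_args.
    """
    cmd = [
        "python", "-m", "torch.distributed.launch",
        f"--nproc_per_node={nproc_per_node}",
        f"--nnodes={nnodes}",
        f"--node_rank={node_rank}",
        script,
        *extra_args,
    ]
    # shlex.quote keeps the string safe to paste into a shell.
    return " ".join(shlex.quote(part) for part in cmd)


print(build_eval_cmd())
# → python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 --node_rank=0 eval_tasks.py
```

Note that `--nproc_per_node` controls how many processes (one per GPU) the launcher spawns on the node, so raising it beyond 1 is how the evaluation would actually be spread across several GPUs.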