sebastian-hofstaetter / matchmaker

Training & evaluation library for text-based neural re-ranking and dense retrieval models built with PyTorch
https://neural-ir-explorer.ec.tuwien.ac.at/
Apache License 2.0

Why use DataParallel rather than DistributedDataParallel for multi-GPU training? #12

Closed · haiahaiah closed this issue 3 years ago

haiahaiah commented 3 years ago

Hi, I want to ask why you chose DataParallel rather than DistributedDataParallel for multi-GPU training. As far as I know, DistributedDataParallel is more efficient than DataParallel.
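For context on the efficiency claim: DataParallel runs a single process that scatters each batch across GPUs and gathers results back, which adds per-batch overhead and is limited by the Python GIL, while DistributedDataParallel runs one process per GPU and overlaps gradient all-reduce with the backward pass. Below is a minimal, self-contained sketch of what a DDP training loop looks like in plain PyTorch. It is not matchmaker's actual code; the model, dataset, and port are hypothetical placeholders for illustration.

```python
# Minimal DistributedDataParallel sketch (assumption: not matchmaker's code;
# model, dataset, and MASTER_PORT are placeholders).
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def train(rank, world_size):
    # One process per GPU; each process owns exactly one device.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"  # placeholder port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(128, 1).cuda(rank)  # placeholder model
    model = DDP(model, device_ids=[rank])       # gradients are all-reduced across processes

    # Placeholder data; DistributedSampler gives each process a disjoint shard.
    dataset = TensorDataset(torch.randn(1024, 128), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards differently each epoch
        for x, y in loader:
            x, y = x.cuda(rank), y.cuda(rank)
            optimizer.zero_grad()
            # backward() overlaps the gradient all-reduce with computation
            loss_fn(model(x), y).backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

The trade-off is that DDP requires this extra scaffolding (process group setup, per-process samplers, multi-process launching), whereas wrapping a model in `torch.nn.DataParallel` is a one-line change, which may be why a training library would start with the simpler option.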