Hi, I'm running model parallelism on two GPUs with train_model_all.py and get the following error:
ValueError: DistributedDataParallel device_ids and output_device arguments only work with single-device/multi-device GPU modules, but got device_ids [0], output_device 0, and module parameters {device(type='cuda', index=0), device(type='cuda', index=1)}
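From the error itself, the model's parameters live on both cuda:0 and cuda:1 while DDP is constructed with device_ids=[0] and output_device=0. A minimal sketch that reproduces the same ValueError is below; the class name and layer sizes are placeholders, not the real code in train_model_all.py:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Placeholder stand-in for the real model: its layers are split
# across cuda:0 and cuda:1 (model parallelism).
class TwoGPUModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = torch.nn.Linear(128, 256).to("cuda:0")
        self.part2 = torch.nn.Linear(256, 10).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))

dist.init_process_group("nccl")  # launched via torchrun, so env vars are set
model = TwoGPUModel()

# This call raises the ValueError above: device_ids / output_device assume
# the whole module sits on a single GPU, but its parameters span two devices.
ddp_model = DDP(model, device_ids=[0], output_device=0)
```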