lhoyer / DAFormer

[CVPR22] Official Implementation of DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation

Multi GPU training #9

Closed · phmalek closed this issue 2 years ago

phmalek commented 2 years ago

How do I train on multiple GPUs? Setting n_gpus=2 in the config file and CUDA_VISIBLE_DEVICES=0,1 didn't work for me; training still runs only on GPU 0.

Thanks

lhoyer commented 2 years ago

Multi-GPU training is not supported by this repository. The variable n_gpus has no functionality; it is only a relic that I forgot to remove. You could refer to the original mmsegmentation framework if you want to integrate multi-GPU training. However, you would also have to adapt the UDA procedure, which is not straightforward. Some potentially useful information is provided here: https://mmgeneration.readthedocs.io/en/latest/tutorials/ddp_train_gans.html
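For readers who want to attempt this themselves, below is only an illustrative sketch (not code from DAFormer or mmsegmentation) of the general pattern described in the linked DDP tutorial: wrap the gradient-receiving student in DistributedDataParallel while keeping the EMA teacher as a plain module that is updated from the student weights. The names `setup_uda_ddp`, `build_student_model`, and `ema_alpha` are hypothetical placeholders.

```python
# Illustrative sketch only, assuming a student/teacher UDA setup trained
# with plain PyTorch DDP (one process per GPU, launched via torchrun).
import copy
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_uda_ddp(local_rank, build_student_model, ema_alpha=0.999):
    # torchrun / torch.distributed.launch provides the rendezvous env vars.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Only the student receives gradients, so only it is wrapped in DDP.
    student = build_student_model().cuda(local_rank)
    student = DDP(student, device_ids=[local_rank])

    # The EMA teacher is never trained directly; it stays a plain module
    # and is kept identical on every rank by the deterministic EMA update.
    teacher = copy.deepcopy(student.module).cuda(local_rank)
    for p in teacher.parameters():
        p.requires_grad_(False)

    def update_teacher():
        # EMA update of the teacher from the (unwrapped) student weights.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(),
                                student.module.parameters()):
                t_p.mul_(ema_alpha).add_(s_p, alpha=1 - ema_alpha)

    return student, teacher, update_teacher
```

Such a script would typically be started with something like `torchrun --nproc_per_node=2 train.py`. Note that this is only the DDP wrapping side; the pseudo-label generation, batch norm buffers, and data sampling of the UDA pipeline would still need to be adapted, which is the non-trivial part mentioned above.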

fuweifu-vtoo commented 2 years ago

Why is multi-GPU training not supported? Is it not necessary for UDA?

fuweifu-vtoo commented 2 years ago

Have you read the code of SoftTeacher? That codebase seems to have nothing to do with ddp_train_gans, yet it still uses multi-GPU training for a UDA-style teacher-student model.