NVlabs / DG-Net

:couple: Joint Discriminative and Generative Learning for Person Re-identification. CVPR'19 (Oral) :couple:
https://www.zdzheng.xyz/publication/Joint-di2019

multi-gpu training #5

Open tau-yihouxiang opened 5 years ago

tau-yihouxiang commented 5 years ago

I checked that torch.nn.DataParallel has been used, but I wonder why multi-GPU training still doesn't work. Thanks in advance.

layumi commented 5 years ago

trainer is a high-level container; we need to wrap the leaf models inside trainer individually.

if num_gpu>1:
    #trainer.teacher_model = torch.nn.DataParallel(trainer.teacher_model, gpu_ids)
    trainer.id_a = torch.nn.DataParallel(trainer.id_a, gpu_ids)
    trainer.gen_a.enc_content = torch.nn.DataParallel(trainer.gen_a.enc_content, gpu_ids)
    trainer.gen_a.mlp_w1 = torch.nn.DataParallel(trainer.gen_a.mlp_w1, gpu_ids)
    trainer.gen_a.mlp_w2 = torch.nn.DataParallel(trainer.gen_a.mlp_w2, gpu_ids)
    trainer.gen_a.mlp_w3 = torch.nn.DataParallel(trainer.gen_a.mlp_w3, gpu_ids)
    trainer.gen_a.mlp_w4 = torch.nn.DataParallel(trainer.gen_a.mlp_w4, gpu_ids)
    trainer.gen_a.mlp_b1 = torch.nn.DataParallel(trainer.gen_a.mlp_b1, gpu_ids)
    trainer.gen_a.mlp_b2 = torch.nn.DataParallel(trainer.gen_a.mlp_b2, gpu_ids)
    trainer.gen_a.mlp_b3 = torch.nn.DataParallel(trainer.gen_a.mlp_b3, gpu_ids)
    trainer.gen_a.mlp_b4 = torch.nn.DataParallel(trainer.gen_a.mlp_b4, gpu_ids)
    # Rebinding the loop variable would not modify the list; assign the wrapped module back by index.
    for i, dis_model in enumerate(trainer.dis_a.cnns):
        trainer.dis_a.cnns[i] = torch.nn.DataParallel(dis_model, gpu_ids)

This code works on multiple GPUs; you can give it a try. Note that you also need to modify the model-saving code to save model.module instead of the wrapped model.
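
A minimal sketch of the save-side change, assuming the trainer object from train.py; the checkpoint keys and file names here are illustrative, not the repo's actual saving code:

import torch
import torch.nn as nn

def unwrap(model):
    # nn.DataParallel keeps the real network in .module; plain modules pass through unchanged.
    return model.module if isinstance(model, nn.DataParallel) else model

# Illustrative: save the unwrapped state_dicts so checkpoints also load on a single GPU.
torch.save({'a': unwrap(trainer.id_a).state_dict()}, 'outputs/id_a.pt')
torch.save({'a': unwrap(trainer.gen_a.enc_content).state_dict()}, 'outputs/enc_content.pt')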

However, it is not the best solution, and we are still working on this. You might notice that I did not wrap the decoder:

    trainer.gen_a.dec = torch.nn.DataParallel(trainer.gen_a.dec, gpu_ids)

This is due to the adaptive instance normalisation layer, which cannot be replicated across multiple GPUs as-is.

tau-yihouxiang commented 5 years ago

Thank you! This is really helpful.

Phi-C commented 5 years ago

> trainer is a high-level container; we need to wrap the leaf models inside trainer individually. [...] It is due to the adaptive instance normalisation layer, which cannot be replicated across multiple GPUs as-is.

I notice that F.batch_norm() is used in the AdaptiveInstanceNorm2d class; is that the reason?

layumi commented 5 years ago

Hi @ChenXingjian

Not really. It is due to the values of w and b in the adaptive instance normalisation layer: https://github.com/NVlabs/DG-Net/blob/master/networks.py#L822-L823

We access w and b on the fly and use assign_adain_params to assign the current parameters: https://github.com/NVlabs/DG-Net/blob/master/networks.py#L236

PyTorch's DataParallel splits the batch into several parts and replicates the network onto all GPUs, so the per-GPU batch size no longer matches the size of w and b.

For example, with a mini-batch of 8 samples and two GPUs, each GPU receives 4 samples, but w and b still correspond to 8 samples, since they are copied from the original full model.
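
A toy illustration of that mismatch (a simplified stand-in for the repo's AdaptiveInstanceNorm2d, not the actual class):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAdaIN(nn.Module):
    # weight/bias are assigned from outside before forward, like assign_adain_params does
    def __init__(self, num_features):
        super().__init__()
        self.num_features = num_features
        self.weight = None   # expected shape: (batch_size * num_features,)
        self.bias = None

    def forward(self, x):
        b, c, h, w = x.shape
        # fold the batch into the channel dim so F.batch_norm normalises per sample
        x = x.contiguous().view(1, b * c, h, w)
        out = F.batch_norm(x, None, None, self.weight, self.bias, True)
        return out.view(b, c, h, w)

layer = ToyAdaIN(4)
layer.weight = torch.ones(8 * 4)      # params assigned for the full batch of 8 samples
layer.bias = torch.zeros(8 * 4)
layer(torch.randn(8, 4, 16, 16))      # full batch of 8: shapes line up, works
try:
    layer(torch.randn(4, 4, 16, 16))  # a DataParallel replica only sees 4 samples
except RuntimeError as e:
    print('size mismatch:', e)        # weight/bias expect 8*4 channels, input has 4*4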

Phi-C commented 5 years ago

@layumi Thank you, that's really helpful. Is there any reference for how to modify the code?

layumi commented 5 years ago

Hi @ChenXingjian, I am working on it and checking the results. If everything goes well, I will upload the code next week.

FreemanG commented 5 years ago

It seems to work with multiple GPUs when you move the assign_adain_params function into the Decoder class.
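
A rough sketch of that idea with toy modules (the real Decoder, AdaptiveInstanceNorm2d and assign_adain_params in networks.py look different; this only shows why assigning the parameters inside forward keeps sizes consistent under DataParallel):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAdaIN(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.num_features = num_features
        self.weight = None
        self.bias = None

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.contiguous().view(1, b * c, h, w)
        return F.batch_norm(x, None, None, self.weight, self.bias, True).view(b, c, h, w)

class ToyDecoder(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.adain = ToyAdaIN(channels)
        self.conv = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, content, adain_params):
        # Assign AdaIN params inside forward: DataParallel scatters both `content`
        # and `adain_params` along the batch dim, so each replica gets matching sizes.
        n = self.adain.num_features
        self.adain.weight = adain_params[:, :n].reshape(-1)
        self.adain.bias = adain_params[:, n:2 * n].reshape(-1)
        return self.conv(self.adain(content))

# Assuming two visible GPUs:
dec = nn.DataParallel(ToyDecoder().cuda(), device_ids=[0, 1])
content = torch.randn(8, 8, 32, 32).cuda()
adain_params = torch.randn(8, 16).cuda()   # 2 * num_features per sample
out = dec(content, adain_params)           # each replica sees 4 samples and 4 rows of params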

layumi commented 5 years ago

@FreemanG Yes, you are right. We could wrap the encoder+decoder together as one function at the beginning, so there will not be any problem with mismatched dimensions.

In fact, I have written the code, and I am checking the result before I release it.
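
A hypothetical wrapper illustrating that idea (the EncodeDecode name and the forward signature are illustrative, not the repo's actual code; it assumes a decoder that takes the AdaIN parameters in its forward, as in the sketch above):

import torch.nn as nn

class EncodeDecode(nn.Module):
    # Hypothetical: bundle content encoding, AdaIN-parameter prediction and decoding
    # into a single forward, so nn.DataParallel scatters the whole pipeline once and
    # every replica computes AdaIN parameters only for its own sub-batch.
    def __init__(self, enc_content, mlp, dec):
        super().__init__()
        self.enc_content = enc_content
        self.mlp = mlp
        self.dec = dec

    def forward(self, x, style_code):
        content = self.enc_content(x)
        adain_params = self.mlp(style_code)      # per-sample AdaIN parameters
        return self.dec(content, adain_params)   # decoder assigns them inside its forward

Wrapping an instance of this in nn.DataParallel then splits x and style_code together, so the decoder never sees a full-batch set of AdaIN parameters with a half-batch of content.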

layumi commented 5 years ago

Dear all,

I just added support for multi-GPU training. You are welcome to check it out.

You still need two GPUs with 10GB+ memory for now. I have not written support for fp16 with multiple GPUs. (I will consider supporting it in the near future.)

Some losses are still calculated on the first GPU, so the memory usage of the first GPU is larger than that of the second.

The main reason is that copy.deepcopy currently does not support multi-GPU modules, so I still keep some losses and forward functions running on the first GPU.

I tested it on my two P6000s (their speed is close to a GTX 1080).

A single GPU takes about 1.1s per iteration at the beginning.

Two GPUs take about 0.9s per iteration at the beginning.

(Since the teacher-model calculation is added at the 30,000th iteration, the speed slows down after that point.)

FreemanG commented 5 years ago

Great :+1:

ramondoo commented 5 years ago

Is it possible to use nn.parallel.replicate instead of deepcopy?
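
For reference, a minimal sketch of that primitive (torch.nn.parallel.replicate is what DataParallel uses internally, together with scatter/parallel_apply/gather; whether it can replace deepcopy for the teacher model here is an open question). It assumes at least two visible GPUs:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).cuda(0)
devices = [0, 1]

replicas = nn.parallel.replicate(net, devices)                          # one copy per device
chunks = nn.parallel.scatter(torch.randn(8, 3, 32, 32).cuda(0), devices)
outputs = nn.parallel.parallel_apply(replicas, [(c,) for c in chunks])  # run replicas in parallel
out = nn.parallel.gather(outputs, 0)                                    # gathered back on GPU 0
print(out.shape)                                                        # torch.Size([8, 8, 32, 32])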

Xt-Chen commented 4 years ago

> We could wrap the encoder+decoder together as one function at the beginning, so there will not be any problem with mismatched dimensions. In fact, I have written the code, and I am checking the result before I release it.

Hi, thank you very much for implementing the multi-GPU training version. May I ask where the method you mentioned (wrapping the encoder+decoder together as one function at the beginning) is reflected in the code? I did not find it in your latest version. Thank you very much.

qasymjomart commented 3 years ago

Hi @layumi, if you are still here, could you please elaborate once more on how you made the adaptive instance normalization layer work in multi-GPU mode with nn.DataParallel? I looked through the code and version history, but I didn't see any substantial changes compared to the first commit.

Thank you