Shathe / SemiSeg-Contrastive

Apache License 2.0

Report some bugs #1

Closed lliuz closed 2 years ago

lliuz commented 2 years ago

Hi @Shathe, nice work but I find some bugs:

  1. The entropy loss is computed in the wrong way: https://github.com/Shathe/SemiSeg-Contrastive/blob/381e182579d286185816356085a497e6d7f14ac5/trainSSL.py#L590-L591 https://github.com/Shathe/SemiSeg-Contrastive/blob/381e182579d286185816356085a497e6d7f14ac5/trainSSL.py#L41-L43 `valid_mask` has shape [B, 1, H, W], while `loss_image` in L42 has shape [B, H, W], so the multiplication broadcasts in an unintended way and produces a tensor of shape [B, B, H, W] in L43.
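To make the broadcasting problem concrete, here is a minimal sketch using NumPy (which follows the same broadcasting rules as PyTorch); the shapes B, H, W are placeholders, not the values used in the repo:

```python
import numpy as np

B, H, W = 4, 8, 8
valid_mask = np.ones((B, 1, H, W))       # shape [B, 1, H, W]
loss_image = np.random.rand(B, H, W)     # shape [B, H, W]

# Buggy: [B, 1, H, W] * [B, H, W] aligns from the right, so the [B, H, W]
# operand is padded to [1, B, H, W] and the result broadcasts to [B, B, H, W].
buggy = valid_mask * loss_image
print(buggy.shape)  # (4, 4, 8, 8)

# Fix: drop the singleton channel dimension so both operands are [B, H, W].
fixed = valid_mask[:, 0] * loss_image
print(fixed.shape)  # (4, 8, 8)
```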

  2. Some parameters are optimized multiple times in an iteration. https://github.com/Shathe/SemiSeg-Contrastive/blob/381e182579d286185816356085a497e6d7f14ac5/model/deeplabv3.py#L336-L342 The `.modules()` function returns all modules recursively, and `.parameters()` returns the parameters of a module including its submodules, so collecting `.parameters()` from every module yields the same parameters multiple times.
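A minimal pure-Python sketch of why this duplicates parameters, mimicking (under stated assumptions, not the real torch API) how `Module.modules()` yields the module itself plus all submodules recursively, while `parameters()` already descends into submodules:

```python
class Module:
    """Toy stand-in for torch.nn.Module, just for illustration."""
    def __init__(self, params=(), children=()):
        self._params = list(params)
        self._children = list(children)

    def modules(self):
        # Yields self and then every submodule, recursively.
        yield self
        for child in self._children:
            yield from child.modules()

    def parameters(self):
        # Yields own parameters plus all submodule parameters.
        for m in self.modules():
            yield from m._params

leaf = Module(params=["w_leaf"])
root = Module(params=["w_root"], children=[leaf])

# Buggy pattern: calling parameters() on every module returned by modules()
# collects "w_leaf" twice (once via root, once via leaf itself).
buggy = [p for m in root.modules() for p in m.parameters()]
print(buggy)  # ['w_root', 'w_leaf', 'w_leaf']

# Fix: call parameters() once, on the top-level module only.
fixed = list(root.parameters())
print(fixed)  # ['w_root', 'w_leaf']
```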

Despite these problems, I successfully reproduced your work. However, fixing the above bugs may require adjusting the hyperparameters; I hope you can update the hyperparameters and results if possible.

Shathe commented 2 years ago

Hi!

First of all thanks for the feedback, thanks a lot!!!!

You are right. Regarding the parameters, they were added three times over, so multiplying the learning rate by 3 would reproduce the previous behaviour; that said, even with the corrected setting it should still work, since the number of iterations is high.

Once I re-run the experiments and see what works better, I will give an update. I have already corrected the things you pointed out, thanks a lot again!