user0407 / CLUDA

Implementation of CLUDA: Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation

Bad performance of DAFormer + CLUDA, please release related codes #5

Open super233 opened 1 year ago

super233 commented 1 year ago

Hi, thanks for your awesome code.

I noticed that the released code is designed for HRDA. Could you please provide the code for DAFormer, especially dacs.py?

super233 commented 1 year ago

@user0407 ?

super233 commented 1 year ago

I have tried to reproduce "DAFormer + CLUDA" with the contrastive_loss in mmseg/models/losses/contrastive_loss.py; however, the best mIoU on GTA2Cityscapes was only 67.88, which is worse than the DAFormer baseline.

Here is my reproduction code; could you please check it? dacs_daformer.zip
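For anyone debugging a similar setup: a supervised pixel-wise InfoNCE objective (the general family such contrastive losses belong to) can be sketched as below. This is only an illustrative numpy sketch under my own assumptions (temperature value, no pixel sampling, no memory bank), not the repo's actual contrastive_loss implementation:

```python
import numpy as np

def pixel_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised pixel-wise InfoNCE sketch (illustrative, not the repo's loss).

    feats:  (N, D) pixel embeddings (L2-normalised inside)
    labels: (N,)   class id per pixel
    """
    feats = np.asarray(feats, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature           # cosine similarity / temperature
    n = sim.shape[0]
    eye = np.eye(n, dtype=bool)
    sim[eye] = -1e9                               # exclude self-pairs from the softmax
    # Row-wise log-softmax: log p(j | anchor i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positives: same class label, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~eye
    has_pos = pos.any(axis=1)                     # keep anchors with >= 1 positive
    mean_log_pos = (log_prob * pos).sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return -mean_log_pos.mean()
```

The loss goes to zero as same-class embeddings align and different-class embeddings become orthogonal, which is a quick sanity check for any reimplementation.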

user0407 commented 1 year ago

For how many iterations are you running?

super233 commented 1 year ago

40000 iterations.

user0407 commented 1 year ago

Please run it for 80k iterations; the results reported in the paper are for 80k iterations. We found that the contrastive loss takes longer to saturate and produce the desired results. Let me know if you are still not getting the results.
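For what it's worth, doubling the schedule in an mmcv/mmseg-style config might look like the following. The field names follow the generic IterBasedRunner convention and are my assumptions about the repo's configs, not taken from them:

```python
# Hypothetical schedule override (field names assumed, mmcv/mmseg convention):
# the paper's numbers are reported at 80k iterations, so double the usual 40k.
runner = dict(type='IterBasedRunner', max_iters=80000)
checkpoint_config = dict(by_epoch=False, interval=8000, max_keep_ckpts=1)
evaluation = dict(interval=8000, metric='mIoU')
```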

super233 commented 1 year ago

OK, I will retry with 80k iterations. By the way, could you please share the code for "DAFormer + CLUDA", especially dacs.py?

Thank you very much!

super233 commented 1 year ago

Does "HRDA + CLUDA" also need to be trained for 80k iters? I have run "HRDA + CLUDA" with your released code; however, the best mIoU at the end of the 40k-iter training was only 73.37, which is worse than the HRDA baseline.

user0407 commented 1 year ago

Yes

super233 commented 1 year ago

I'm sorry to bother you, but I cannot reproduce the performance of "DAFormer + CLUDA". Could you please provide the related code? I'm looking forward to your reply!

super233 commented 1 year ago

I have tried many times; however, I still cannot reproduce the performance reported in your paper. Could you please provide the code for DAFormer?

user0407 commented 1 year ago

Hey,

I'm sorry, I cannot provide the code right now. I will try to upload the training log, which has all the hyper-parameter settings; check your configuration against it. This might take a few days as I'm currently held up with other work.

Thank you,
Midhun

super233 commented 1 year ago

That's really helpful for me. Thanks.

super233 commented 1 year ago

One more thing: when training with mmsegmentation, the code is automatically packed as code.tar.gz. If you can find the training log, you should also find the corresponding code.tar.gz.

user0407 commented 1 year ago

Thanks for pointing that out. I will upload that then.

super233 commented 1 year ago

For DAFormer + CLUDA, do you directly use the fused features for contrastive learning? Did you use any projector to reduce the feature dimensions? And how do you set fm_size?
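A common pattern in contrastive segmentation work (an assumption on my part, not necessarily what the authors did) is to pass the fused decoder features through a small per-pixel MLP and L2-normalise before computing the loss; fm_size would then presumably control the spatial resolution the feature map is pooled to before pixels are sampled. A minimal numpy sketch, where the 256-channel input dimension and the 64-dim projection are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projector: a per-pixel MLP, i.e. two 1x1 convs realised as
# matrix multiplies over the channel axis. Dimensions are my assumptions.
C_in, C_proj = 256, 64
W1 = rng.standard_normal((C_in, C_in)) * 0.02   # hidden 1x1 conv
W2 = rng.standard_normal((C_in, C_proj)) * 0.02 # output 1x1 conv

def project(feat_map):
    """feat_map: (H, W, C_in) fused features -> (H, W, C_proj) unit vectors."""
    h = np.maximum(feat_map @ W1, 0.0)          # 1x1 conv + ReLU
    z = h @ W2                                  # 1x1 conv down to C_proj
    return z / np.linalg.norm(z, axis=-1, keepdims=True)
```

Normalising to unit length makes the dot product in the contrastive loss a cosine similarity, which is the usual convention with a temperature-scaled softmax.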

super233 commented 1 year ago

Two weeks have passed; is there any progress?

user0407 commented 1 year ago

@super233

Please find the training log at this link.

wzr0108 commented 1 year ago

Have you managed to reproduce it? I also need the code for DAFormer + CLUDA.