FeatherWaves666 closed this issue 5 months ago
Hello, and thank you for your interest and for the detailed information about the issue you're encountering with the Mamba framework. The loss and decode.loss_ce turning into NaN during training is likely caused by insufficient support for Automatic Mixed Precision (AMP) in Mamba. Please try disabling AMP by removing the --amp flag from your training command to see whether that resolves the problem.
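As an illustrative sketch (not the actual Mamba or mmseg code), the snippet below shows one common way half precision produces NaN losses: exponentials in a softmax cross-entropy overflow float16's range (max ~65504) to inf, and the subsequent inf/inf division yields NaN. The function name and logit values are made up for demonstration.

```python
import numpy as np

def softmax_ce(logits, target, dtype):
    """Cross-entropy of one sample, with intermediates kept in `dtype`."""
    z = logits.astype(dtype)
    e = np.exp(z)       # exp(20) ~ 4.9e8 overflows float16 to inf
    p = e / e.sum()     # inf / inf -> nan for the overflowed entry
    return -np.log(p[target])

logits = np.array([20.0, 5.0, 1.0])   # moderately large logits
loss_fp32 = softmax_ce(logits, 0, np.float32)  # finite
loss_fp16 = softmax_ce(logits, 0, np.float16)  # nan
```

This is why disabling AMP (full float32 training) can make the NaNs disappear even though the model and data are unchanged.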
Please feel free to communicate with me if you have further issues or need more assistance.
Yes, that solves the problem, thank you very much. However, I also found that, without modifying any parameters, the results I obtained were quite low, far below both the results in the paper and those of the classical mmseg models (FCN, etc.). (On the Vaihingen dataset, mIoU: 60.)
Hi. Please check whether you exclude the "clutter" class when calculating the mIoU metric. On both ISPRS datasets, the clutter category must be excluded when computing metrics, as I pointed out in the paper. If you look at the benchmarks for these two ISPRS datasets, you will find that everyone's experimental setup is similar. Please verify that your experimental setup matches mine.
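For readers checking their own evaluation, here is a minimal sketch of mIoU computed over all classes except an ignored set. It is not the repository's evaluation code; the class count and the clutter label id (assumed here to be 5) are placeholders that depend on the dataset configuration.

```python
import numpy as np

NUM_CLASSES = 6   # ISPRS Potsdam/Vaihingen have 6 classes incl. clutter
CLUTTER_ID = 5    # assumed label id for "clutter"; check your config

def miou(pred, gt, num_classes=NUM_CLASSES, ignore=(CLUTTER_ID,)):
    """Mean IoU over all classes except those listed in `ignore`."""
    ious = []
    for c in range(num_classes):
        if c in ignore:
            continue
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:            # skip classes absent from pred and gt
            ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging over five classes instead of six can change the reported mIoU substantially, since clutter IoU is typically very low on these datasets.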
Thank you again for your attention, and feel free to communicate with me.
Thanks for your answer. Climb your mountain of success!
Thank you for your kind words. I wish you smooth sailing in your academic endeavors as well!
Checklist
Describe the bug During training, loss and decode.loss_ce become NaN. Apart from the dataset path, the source code has not been modified.
Reproduction
What command or script did you run?
Did you make any modifications on the code or config? Did you understand what you have modified? Apart from the dataset path, the source code has not been changed.
What dataset did you use? I used Potsdam; so far it is the only dataset I have used.
Environment
Error traceback
If applicable, paste the error trackback here.
I hope this can be answered. Thanks!