anwai98 opened this issue 8 months ago
Other users have faced the issue as well:
Mention: I have faced Point 1 extensively, and Point 2 in ViM-UNet for the largest ViM model (vim_b).
Edit: Both issues have been taken care of now (in U-Mamba and ViM-UNet).
Also, a nice spot to track Vision Mamba-related work: https://github.com/VisionMamba
UMambaEnc is a little weird: training failed entirely for 3 of the 5 folds.
Suspicions:
PS: UMambaEnc uses Mamba blocks throughout the entire encoder; UMambaBot uses a single Mamba bottleneck layer between the encoder and decoder.
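For anyone skimming, here is a rough sketch of the structural difference between the two variants. The function names and layer tuples are purely illustrative (not the actual U-Mamba code); the point is only where the Mamba blocks sit in each architecture:

```python
# Illustrative sketch only -- NOT the real U-Mamba implementation.
# UMambaEnc: a Mamba block paired with the conv block in every encoder stage.
# UMambaBot: plain conv encoder, one Mamba block only at the bottleneck.

def umamba_enc_layers(n_stages=4):
    """Every encoder stage pairs a conv block with a Mamba block."""
    encoder = [layer for s in range(n_stages)
               for layer in (("conv", s), ("mamba", s))]
    decoder = [("deconv", s) for s in reversed(range(n_stages))]
    return encoder + decoder

def umamba_bot_layers(n_stages=4):
    """Plain conv encoder; a single Mamba block at the bottleneck."""
    encoder = [("conv", s) for s in range(n_stages)]
    bottleneck = [("mamba", "bottleneck")]
    decoder = [("deconv", s) for s in reversed(range(n_stages))]
    return encoder + bottleneck + decoder

enc = umamba_enc_layers()
bot = umamba_bot_layers()
print(sum(1 for kind, _ in enc if kind == "mamba"))  # 4: Mamba in every stage
print(sum(1 for kind, _ in bot if kind == "mamba"))  # 1: Mamba at bottleneck only
```

This also hints at why UMambaEnc might be harder to train: the Mamba blocks touch every encoder stage rather than a single bottleneck, so any instability propagates through the whole feature hierarchy.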