Hi, thanks for your contribution!
When I try to fine-tune VMamba on our own dataset (we use vssm_base_224 and load vssmbase_dp06_ckpt_epoch_241.pth), the GPU memory usage is surprisingly high even with the batch size set to 4, which makes training impossible on an RTX 3090 (24 GB). Could you please suggest any solutions?
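For reference, this is roughly how I observe the peak usage after a single forward/backward pass (a minimal, self-contained sketch: a tiny stand-in model replaces the actual vssm_base_224 build from this repo, just to show the measurement):

```python
import torch
import torch.nn as nn

# Stand-in model: the full vssm_base_224 build is omitted here,
# so a tiny CNN stands in purely to illustrate the measurement.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),
).cuda()

x = torch.randn(4, 3, 224, 224, device="cuda")  # batch size 4, 224x224 inputs

torch.cuda.reset_peak_memory_stats()
loss = model(x).sum()   # one forward pass
loss.backward()         # plus backward, where activation memory dominates
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```

With the real model, the peak reported this way is what exceeds the 24 GB of the RTX 3090 even at batch size 4.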