JCruan519 / VM-UNet

(ARXIV24) This is the official code repository for "VM-UNet: Vision Mamba UNet for Medical Image Segmentation".
Apache License 2.0

depths=[2, 2, 9, 2] and depths_decoder=[2, 9, 2, 2]? #71

Open tubixiansheng opened 1 month ago

tubixiansheng commented 1 month ago

Hello, I would like to ask: in the paper's architecture diagram, every layer uses ×2 VSS blocks, so why does the code set depths=[2, 2, 9, 2] and depths_decoder=[2, 9, 2, 2]?

lwtgithublwt commented 1 month ago

> In the paper's architecture diagram, every layer uses ×2 VSS blocks, so why does the code set depths=[2, 2, 9, 2] and depths_decoder=[2, 9, 2, 2]?

I see the relevant settings are in the config.

Shay-ACC commented 2 weeks ago

You probably just ran vmunet.py directly to print the model, which uses the defaults. The values actually used for training, depths=[2, 2, 2, 2] and depths_decoder=[2, 2, 2, 1], are declared in class setting_config in config_setting.py, which train.py imports as setting_config.
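
To make the distinction concrete, here is a minimal sketch of the behavior described above. The build_model function is a hypothetical stand-in, not the repository's actual VMUNet constructor: vmunet.py defines defaults of depths=[2, 2, 9, 2] and depths_decoder=[2, 9, 2, 2], while train.py passes the values from setting_config, so printing the model from vmunet.py shows a different configuration than the one used in training.

```python
# Stand-in for VMUNet.__init__ in vmunet.py (hypothetical simplification,
# not the repository's real constructor): note the default arguments.
def build_model(depths=(2, 2, 9, 2), depths_decoder=(2, 9, 2, 2)):
    return {'depths': list(depths), 'depths_decoder': list(depths_decoder)}

# Running vmunet.py directly uses the defaults, which is what gets printed:
print(build_model())
# {'depths': [2, 2, 9, 2], 'depths_decoder': [2, 9, 2, 2]}

# train.py instead passes the values from config_setting.setting_config,
# so the defaults above are never used during training:
model_config = {'depths': [2, 2, 2, 2], 'depths_decoder': [2, 2, 2, 1]}
print(build_model(**model_config))
# {'depths': [2, 2, 2, 2], 'depths_decoder': [2, 2, 2, 1]}
```

Under that reading, the config values, not the vmunet.py defaults, are what correspond to the ×2 VSS blocks shown in the paper's figure.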