920232796 / NestedFormer

NestedFormer: Nested Modality-Aware Transformer for Brain Tumor Segmentation (MICCAI 2022)
Apache License 2.0

training issue #1

Open zerone-fg opened 1 year ago

zerone-fg commented 1 year ago

Hi, I am interested in your work published at MICCAI 2022, but when I try to reproduce the results presented in your paper, training is very slow: the training loss drops only gradually. Is this normal, or should pretrained transformer weights be loaded? I downloaded the BraTS 2020 data and fed it into the network as described, without additional processing. I am looking forward to your reply, thank you.

920232796 commented 1 year ago

Thank you for your interest. Because NestedFormer uses four encoders to process the BraTS 2020 dataset, it is normal for training to run somewhat slower, but it should not be very slow. What device are you using? I am using a V100 or A100 GPU. You can also try adjusting the network hyperparameters to reduce the number of parameters in the network.
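
The suggested fix (shrinking the network to cut the parameter count) can be sketched with back-of-the-envelope arithmetic. This is a minimal illustration, not the repository's actual architecture: `conv3d_params`, `encoder_params`, and the channel widths below are hypothetical, but they show why halving every channel width cuts convolutional parameters roughly 4x, and why four modality-specific encoders multiply the encoder cost by 4:

```python
def conv3d_params(in_ch, out_ch, k=3):
    """Weights + biases of one k*k*k 3D convolution (no norm layers)."""
    return in_ch * out_ch * k ** 3 + out_ch

def encoder_params(widths, in_ch=1):
    """Parameters of one toy encoder: a chain of convs through `widths`."""
    total, prev = 0, in_ch
    for w in widths:
        total += conv3d_params(prev, w)
        prev = w
    return total

# NestedFormer runs one encoder per input modality, so multiply by 4.
full = 4 * encoder_params([32, 64, 128, 256])   # illustrative widths
slim = 4 * encoder_params([16, 32, 64, 128])    # every width halved

print(full, slim)  # slim is roughly a quarter of full
```

Since each conv layer's weight count scales with `in_ch * out_ch`, halving both factors divides it by about 4, which is why reducing the base channel width is an effective knob for speeding up training at some cost in capacity.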