cvlab-stonybrook / SelfMedMAE

Code for ISBI 2023 paper "Self Pre-training with Masked Autoencoders for Medical Image Classification and Segmentation"
Apache License 2.0

An issue while training on my own liver tumor dataset #8

Open wangbaoyuanGUET opened 8 months ago

wangbaoyuanGUET commented 8 months ago

Hello, developers of SelfMedMAE. Your work is valuable and has been very enlightening, and I am trying to build on it to improve results on my liver tumor dataset. Since my dataset has only 2 classes while BTCV has 14, I made some changes to the code so that the model outputs 2 channels. I set the number of training epochs to 1000, since training on 200 volumes already takes a lot of time. However, the test Dice was just under 0.6, whereas I was able to reach around 0.66 when training a plain UNETR without any pre-training. I am now trying some pre-training and hoping to see a change. If the accuracy increases, I can keep making other improvements on top of SelfMedMAE. Thank you very much!
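For reference, a minimal sketch of the kind of change described above, assuming a MONAI-style UNETR is used for the segmentation fine-tuning; the names and values here are illustrative, not the repository's exact code:

```python
# Hypothetical sketch: adapting the segmentation head from BTCV's
# 14 classes to a 2-class (background + tumor) liver dataset.
from monai.networks.nets import UNETR

roi_size = (96, 96, 64)  # illustrative training crop size

model = UNETR(
    in_channels=1,      # single-channel CT volumes
    out_channels=2,     # 2 output channels instead of BTCV's 14
    img_size=roi_size,  # must match the crop the patch embedding sees
)
```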

wangbaoyuanGUET commented 8 months ago

Hi, developers of SelfMedMAE. I have solved the problem by changing the input_size used when embedding the input image into patches. This value defaults to 224; after I changed it to the ROI size (96, 96, 64), the Dice increased to 0.67. Now I am trying to pre-train on my 400+ volumes. I set the number of pre-training epochs to 10k, which will take a lot of time. I wonder whether the number of pre-training epochs should stay fixed, even with more data? Thank you very much!
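To illustrate why this matters, here is a minimal sketch of a ViT-style 3D patch embedding whose input size defaults to 224, assuming the fix is to pass the actual ROI size instead; the class and argument names are illustrative, not copied from SelfMedMAE:

```python
# Hypothetical sketch of a 3D patch embedding. If input_size does not
# match the real crop size, the patch grid (and hence the number of
# positional-embedding tokens) is computed for the wrong shape.
import torch.nn as nn


class PatchEmbed3D(nn.Module):
    def __init__(self, input_size=(96, 96, 64),  # was 224; set to the ROI size
                 patch_size=16, in_chans=1, embed_dim=768):
        super().__init__()
        # derive the patch grid from the actual crop size
        self.grid_size = tuple(s // patch_size for s in input_size)
        self.num_patches = self.grid_size[0] * self.grid_size[1] * self.grid_size[2]
        self.proj = nn.Conv3d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (B, C, 96, 96, 64) -> (B, num_patches, embed_dim)
        x = self.proj(x)
        return x.flatten(2).transpose(1, 2)
```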