Hi, due to class imbalance, training on the Synapse dataset is highly unstable when using early stopping (it is influenced by randomness). You could reduce the validation interval to a smaller number, e.g., 5, 2, or even 1, in https://github.com/xmed-lab/DHC/blob/975641335b902cd7fcd97ffc332ce97b0a6169ce/code/train_dhc.py#L364, or enlarge the early_stop_patience.
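For reference, a minimal sketch of the kind of change I mean (the names `eval_interval` and `early_stop_patience` are illustrative and may not match the actual variables in train_dhc.py; a dummy `evaluate()` stands in for the real validation pass):

```python
import random

def evaluate(epoch: int) -> float:
    """Stand-in for the real validation pass; returns a mean Dice score."""
    return random.random()

eval_interval = 2          # validate more often, e.g. every 1, 2, or 5 rounds
early_stop_patience = 200  # or enlarge the patience instead

best_dice, no_improve = 0.0, 0
for epoch in range(1000):
    if (epoch + 1) % eval_interval != 0:
        continue
    dice = evaluate(epoch)
    if dice > best_dice:
        best_dice, no_improve = dice, 0   # new best checkpoint
    else:
        no_improve += 1
        if no_improve >= early_stop_patience:
            break                          # stop only after a long plateau
print("best dice:", best_dice)
```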
Is this score too low? The best evaluation Dice is 0.13395972549915314. I changed the validation interval, but it did not seem to improve performance.
Yes, it should be around 0.4; you can check our training logs in the weights download link.
Maybe you need to check whether the data is correctly processed, e.g., through visualization.
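A quick way to do that is to load one preprocessed case and look at a middle slice together with its label. The file names and `.npy` layout below are assumptions, so adapt them to however your copy of the preprocessed Synapse data is actually stored:

```python
import numpy as np
import matplotlib.pyplot as plt

image = np.load("synapse_data/case0001_image.npy")  # hypothetical path
label = np.load("synapse_data/case0001_label.npy")  # hypothetical path

print("image:", image.shape, image.dtype, float(image.min()), float(image.max()))
print("label:", label.shape, "classes:", np.unique(label))

# Compare a middle slice of the image and its label to confirm alignment.
z = image.shape[0] // 2
fig, axes = plt.subplots(1, 2)
axes[0].imshow(image[z], cmap="gray"); axes[0].set_title("image")
axes[1].imshow(label[z]);              axes[1].set_title("label")
plt.show()
```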
Thank you for your response. I have two more questions:
1. The results on the AMOS dataset are similar to yours, but the Dice score on the Synapse dataset is still around 0.15. Why is there such a difference when I run your dataset and code on my machine? Is this related to the small number of samples in the Synapse dataset? Could you give me some advice?
2. Why is the loss during training negative, sometimes as large as -9 or -10?
It is indeed due to the small number of samples in the Synapse dataset. There are several parameters you can modify: the learning rate, the batch size, a larger weight for the unsupervised loss, a smaller accumulate_iters for DiffDW, and a smaller momentum for DistDW.
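To illustrate the DistDW momentum point with a simplified sketch (this is an exponential-moving-average analogy, not the actual DHC implementation): on a small dataset there are few batches per epoch, so a smaller momentum lets the running class statistics adapt faster.

```python
import numpy as np

def ema_update(stats, batch_stats, momentum):
    # Smaller momentum -> the running statistics follow new batches faster.
    return momentum * stats + (1 - momentum) * batch_stats

batch_stats = np.array([0.70, 0.20, 0.07, 0.03])  # imbalanced per-batch class frequencies
for momentum in (0.99, 0.9):
    stats = np.ones(4) / 4                         # start from a uniform estimate
    for _ in range(20):                            # only 20 batches, as on a small dataset
        stats = ema_update(stats, batch_stats, momentum)
    print(momentum, np.round(stats, 3))            # 0.99 barely moves; 0.9 adapts much more
```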
We use -dice as the loss rather than 1 - dice, so the loss can be negative; when weighted by the class weights, it can be smaller than -1.
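As a self-contained illustration of why the value can go below -1 (the class weights here are made up, not the ones DiffDW/DistDW actually produce):

```python
import torch
import torch.nn.functional as F

def neg_weighted_dice(pred, target, weights, eps=1e-5):
    # pred, target: (N, C, D, H, W); returns the weighted negative Dice.
    dims = tuple(range(2, pred.dim()))
    inter = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    dice = (2 * inter + eps) / (union + eps)   # per-class Dice in [0, 1]
    return -(weights * dice).mean()            # -dice, so it is <= 0; large weights push it below -1

pred = torch.softmax(torch.randn(2, 4, 8, 8, 8), dim=1)
target = F.one_hot(torch.randint(0, 4, (2, 8, 8, 8)), num_classes=4)
target = target.permute(0, 4, 1, 2, 3).float()
weights = torch.tensor([0.5, 2.0, 5.0, 10.0])  # illustrative class weights
print(neg_weighted_dice(pred, target, weights))
```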
Hello! I downloaded your preprocessed Synapse dataset and trained according to the instructions, but the training results did not improve. What could be the reason for this?