Closed: @1164094277 closed this issue 3 years ago.
@1164094277 There are two potential problems. First, it seems that you have over-regularized the deformation field, so its magnitude is far too small to register the images. Try lambda1 = 100, lambda2 = 2, lambda3 = 0.1, iteration = 160000 instead. This should allow the model to register images with large initial differences. Empirically, iteration = 10000 is not sufficient for the model to converge.
Second, it is possible that you are using the raw image in scanner space, which may not be pre-processed/affine-aligned, or whose background intensity is not zero. Please use the corrected image aligned in template space, "aligned_norm.nii.gz", instead (from Adalca's repository). You need to factor out the linear misalignment of the data. (As the registration process in SyN includes affine registration, SyN may perform better when the data is not affine pre-registered to a common template space.)
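To check quickly whether a scan is the pre-processed one, you can inspect its intensity range and background, e.g. with nibabel (a minimal sketch, assuming nibabel is installed and the path below is replaced with one of your own scans):

```python
# Minimal sanity check of an input scan (illustrative; adjust the path).
import nibabel as nib
import numpy as np

img = nib.load("aligned_norm.nii.gz").get_fdata()

print("shape:", img.shape)
print("intensity range:", float(img.min()), float(img.max()))
# For the pre-processed, template-aligned data the background should be exactly zero,
# so a large fraction of zero voxels is expected.
print("fraction of zero voxels:", float(np.mean(img == 0)))
```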
Loss 4 is computed from the Jacobian determinant of the deformation field. It is always zero because 1) the deformation field is over-regularized, so it stays smooth and close to the identity, and 2) the deformation field is diffeomorphic (bijective and invertible), so no folding (negative Jacobian determinant) occurs.
Ideally, Jdet should stay close to zero for all cases, but it should not be exactly zero throughout training. Relaxing the smoothness regularization of the deformation field will solve this problem, i.e., try lambda1 = 100, lambda2 = 2, lambda3 = 0.1, iteration = 160000 instead.
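For reference, here is a minimal sketch of a folding (negative Jacobian determinant) penalty of this kind. It illustrates the idea only and is not the exact Jdet loss implemented in this repository; the function name and channel-order assumption are for illustration:

```python
# Illustrative folding penalty (not the exact Jdet loss of this repo):
# mean of ReLU(-det J), which is exactly zero whenever det J > 0 everywhere.
import torch
import torch.nn.functional as F

def neg_jacobian_det_loss(disp):
    """disp: (B, 3, D, H, W) displacement field of phi(x) = x + disp(x),
    with channels assumed to follow the (D, H, W) axis order."""
    # Forward differences of each displacement component along z, y, x.
    du_dz = disp[:, :, 1:, :-1, :-1] - disp[:, :, :-1, :-1, :-1]
    du_dy = disp[:, :, :-1, 1:, :-1] - disp[:, :, :-1, :-1, :-1]
    du_dx = disp[:, :, :-1, :-1, 1:] - disp[:, :, :-1, :-1, :-1]

    # Jacobian of phi = identity + displacement, hence the +1 on the diagonal.
    J = torch.stack([
        torch.stack([du_dz[:, 0] + 1, du_dy[:, 0],     du_dx[:, 0]],     dim=-1),
        torch.stack([du_dz[:, 1],     du_dy[:, 1] + 1, du_dx[:, 1]],     dim=-1),
        torch.stack([du_dz[:, 2],     du_dy[:, 2],     du_dx[:, 2] + 1], dim=-1),
    ], dim=-2)                         # (B, D-1, H-1, W-1, 3, 3)

    det = torch.det(J)                 # local Jacobian determinants
    return F.relu(-det).mean()         # only folded voxels (det < 0) contribute

# A smooth, near-identity field has det J > 0 everywhere, so the penalty is 0:
# disp = 0.01 * torch.randn(1, 3, 32, 32, 32)
# print(neg_jacobian_det_loss(disp))   # tensor(0.)
```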
Would you please let me know whether the above suggestions solve your problem?
@cwmok Thank you for your reply. I am using the data named "aligned_norm.nii.gz". After receiving your advice, I tried lambda1 = 100, lambda2 = 2, lambda3 = 0.1 for training, but loss4 is still zero.
Could you share two sample input volumes (.nii.gz) and the exact code you are using?
@1164094277 I ran a mini-experiment with your data (no crop, downsampled to (80, 96, 112)). I found that the cause of the problem could be inconsistent intensity normalization in the data. Applying min-max normalization to each scan (so that the intensity of each scan lies within [0, 1]) should solve this problem. Moreover, since the data has many background voxels, the optimal parameters for this dataset and the convergence speed differ from those in the paper. Also, at the very early stage of training, Jdet = 0 is normal because the magnitude of the predicted deformation field is still small. However, the Jdet loss should not be zero for all cases in the middle of training.
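Concretely, the per-scan min-max normalisation is just the following (a minimal NumPy sketch; min_max_norm is an illustrative helper name, and the small epsilon guards against a constant image):

```python
import numpy as np

def min_max_norm(img):
    """Rescale one scan so its intensities lie in [0, 1]."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)
```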
I will update the code today and get back to you once it is done. Thank you for reporting the issue.
@cwmok Thank you very much!
@1164094277
I have updated the code by adding min-max normalisation to the data generator. Please download the latest version of the code. If you want to train with the data from https://github.com/adalca/medical-datasets/blob/master/neurite-oasis.md, set Dataset_epoch(names, norm=True) at line 77 in Train_sym_onepass.py and norm = True at line 57 in Test_SYMNet.py.
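Roughly, the flag works as in the sketch below. This is not the repo's Dataset_epoch (which also handles the pairing of volumes for registration); the class name is hypothetical and the sketch only shows where the normalisation takes effect:

```python
import numpy as np
import nibabel as nib
from torch.utils.data import Dataset

class NormalisedVolumes(Dataset):
    """Simplified single-volume sketch of a data generator with a norm flag."""
    def __init__(self, names, norm=True):
        self.names = names  # list of .nii.gz file paths
        self.norm = norm

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        img = nib.load(self.names[idx]).get_fdata().astype(np.float32)
        if self.norm:
            # Per-scan min-max normalisation to [0, 1].
            img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        return img[np.newaxis]  # add a channel dimension: (1, D, H, W)
```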
I have tried a mini-experiment with your data (no crop, downsampled to (80, 96, 112)) with (--iteration 160000, --local_ori 100, --magnitude 0.001, --smooth 3). I am not sure whether this is the optimal set of hyperparameters for your data, but the loss values during training look good to me (-sim_full converges toward -1, -smo is not steady, and -Jdet is occasionally non-zero).
Also, I want to emphasise that setting iteration = 10000 for training is not sufficient for the model to converge. Even if you set the optimal hyperparameters, you cannot achieve good results from the model with just 10000 iterations. Please try to evaluate the model with at least 80000 iterations.
All in all, hyperparameter tuning in deep learning-based image registration is difficult. Recently, we have proposed a conditional deformable image registration method that enables rapid hyperparameter tuning. If you are interested in it, please visit https://github.com/cwmok/Conditional_LapIRN.
We are looking forward to seeing your results!
@cwmok Thank you very much! I will try it later.
1. About the Dice score: I use the same dataset as you for training, but my Dice is very low. My data is from https://github.com/adalca/medical-datasets/blob/master/neurite-oasis.md; I use 255 volumes for training and only cropped them. With the initial parameters (lambda1 = 1000, lambda2 = 5, lambda3 = 1, iteration = 160000) I get a Dice of only 0.5369, while the Dice of SyN is 0.5921. I found that iteration = 10000 and iteration = 160000 give similar results, and changing the other parameters did not help either:
lambda1 = 1000, lambda2 = 5, lambda3 = 1, iteration = 10000, Dice = 0.5070;
lambda1 = 1000, lambda2 = 3, lambda3 = 0.1, iteration = 10000, Dice = 0.4681 (the same parameters as in the paper);
lambda1 = 1000, lambda2 = 5, lambda3 = 0.01, iteration = 10000, Dice = 0.5210.
2. Why is loss4 always equal to 0 during training?