Hi @eleplantttt,
I am not sure why the TRE is not decreasing. Usually, people solve this kind of problem by switching to a PyTorch build whose CUDA version matches the CUDA version supported by your GPU's driver.
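A quick way to check for such a mismatch (these are standard PyTorch calls, nothing repo-specific):

```python
import torch

# CUDA version the installed PyTorch wheel was built against, e.g. '11.1'
print(torch.version.cuda)

# Whether PyTorch can actually use a GPU with the current driver
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

If the wheel's CUDA version is newer than what `nvidia-smi` reports your driver supports, installing a matching wheel is the usual fix.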
For the hyperparameters, I will give you an example here: `Brats_NCC_disp_fea6b5_AdaIn64_t1ce_fbcon_occ01_inv1_a0015_aug_mean_fffixedgithub`

- Brats: BraTS dataset
- NCC: NCC similarity
- disp: displacement vector field
- fea6: start_channel=6
- b5: 5 conditional image registration blocks
- AdaIn64: conditional image registration block with a latent embedding of size 64
- t1ce: T1 contrast-enhanced (T1ce) modality
- fbcon: forward-backward consistency loss
- occ01: --occ 0.01
- inv1: --inv_con 0.1
- a0015: threshold (alpha)
- aug: augmentation
- mean: mean error used when determining the threshold
- fffixed: bug fixed 3 times
- github: for GitHub
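Reading those tokens back into flags, a hypothetical `argparse` sketch (only `--occ`, `--inv_con`, and `start_channel` are named above; everything else here is a placeholder, and the actual training script may differ):

```python
import argparse

# Hypothetical parser illustrating how the name tokens map to flags;
# only --occ, --inv_con, and start_channel are confirmed above.
parser = argparse.ArgumentParser()
parser.add_argument('--occ', type=float, default=0.01,
                    help='occlusion/mask loss weight (occ01 -> 0.01)')
parser.add_argument('--inv_con', type=float, default=0.1,
                    help='forward-backward consistency weight (inv1 -> 0.1)')
parser.add_argument('--start_channel', type=int, default=6,
                    help='base feature channels (fea6 -> 6)')
args = parser.parse_args([])  # parse defaults for illustration
print(args)
```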
Thanks for your reply. Also, I would like to confirm: is the training data resampled from the original dataset to 160 × 160 × 80 with 1.5 × 1.5 × 1.94 mm³ resolution, while the validation and test data stay at 240 × 240 × 155 with 1.0 × 1.0 × 1.0 mm³ isotropic resolution, or are they also 160 × 160 × 80 with 1.0 × 1.0 × 1.0 mm³ isotropic resolution?
For the model input, we downsampled the image scans to 160 x 160 x 80. During the validation and testing phases, we also used 160 x 160 x 80 as the model input and predicted a 160 x 160 x 80 deformation field. We then upsampled the 160 x 160 x 80 deformation field to match the original resolution (240 x 240 x 155) when computing the TRE.
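For reference, a minimal sketch of that upsampling step (not the repo's exact code; it assumes the field is a `(1, 3, D, H, W)` tensor of displacements in voxel units, with channel i displacing along spatial axis i):

```python
import torch
import torch.nn.functional as F

def upsample_disp_field(disp, out_shape):
    """Trilinearly upsample a dense displacement field and rescale
    its values from input-grid voxels to output-grid voxels.

    disp: (1, 3, D, H, W) tensor, e.g. the 160 x 160 x 80 prediction.
    out_shape: target spatial shape, e.g. (240, 240, 155).
    """
    in_shape = disp.shape[2:]
    up = F.interpolate(disp, size=out_shape, mode='trilinear',
                       align_corners=True)
    # A displacement of 1 voxel on the coarse grid spans
    # (s_out - 1) / (s_in - 1) voxels on the fine grid
    # (consistent with align_corners=True).
    for i, (s_in, s_out) in enumerate(zip(in_shape, out_shape)):
        up[:, i] *= (s_out - 1) / (s_in - 1)
    return up

# e.g. disp_full = upsample_disp_field(disp_160, (240, 240, 155))
# before converting landmark displacements to mm for the TRE
```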
I tested the code with PyTorch 1.9.0+cu111 and Python 3.7, and I divided the dataset into Group 1 as in https://github.com/cwmok/DIRAC/issues/2#issuecomment-1321023467.

But the TRE is still not decreasing well. Validation TRE log for the baseline (step: TRE):

```
0:12.487116771496558       2000:12.37371896878371     4000:12.394197923100805    6000:12.38666225157159
8000:12.36765607362591     10000:12.380144045118842   12000:12.371554591351336   14000:12.408534770433159
16000:12.39859589896192    18000:12.395386964509843   20000:12.39804090395997    22000:12.361471513468787
24000:12.367681955486278   26000:12.349141612545356   28000:12.330313223098944   30000:12.342290339570624
32000:12.350605762215306   34000:12.370017916024203   36000:12.31625433505047    38000:12.387440521933634
40000:12.317589487243632   42000:12.323358883230958   44000:12.36420845655787    46000:12.314409053862938
48000:12.290446183565845   50000:12.295824481368367   52000:12.330525846060256   54000:12.298484747169955
56000:12.275610918378481   58000:12.313097093559612   60000:12.25606998105038    62000:12.306192251461749
64000:12.28065347142032    66000:12.275133004210426   68000:12.244126843600679   70000:12.299921894053139
72000:12.24138714163893    74000:12.25715705496191    76000:12.25941246732856    78000:12.246555462799964
80000:12.21263937018973    82000:12.217094181256682   84000:12.236345034861085   86000:12.225464009863117
88000:12.196547141519458   90000:12.248292545283503   92000:12.23647574916546    94000:12.203680990634
96000:12.188105830414758   98000:12.21968314071012    100000:12.194088823694717  102000:12.20491990325111
104000:12.175056997698276  106000:12.18581277372554   108000:12.201604246043711  110000:12.213492063121887
112000:12.171283643445134  114000:12.206277561991891  116000:12.22103524874248   118000:12.153908183857325
120000:12.186575405018086  122000:12.156563864072956  124000:12.17991741972445   126000:12.15312451624883
128000:12.153799381634721  130000:12.126739500808773
```
By the way, I found that you offer two trained models, but I do not understand the meaning of a model name like Brats_NCC_disp_fea6b5_AdaIn64_t1ce_fbcon_occ01_inv5_a0015_aug_mean_fffixed_github_stagelvl3_64000.

The middle parts of the name, disp_fea6b5_AdaIn64 and fbcon_occ01_inv5_a0015_aug_mean_fffixed, seem to encode hyperparameters. Does the 01 in occ01 mean the mask loss weight is 0.1 or 0.01? Does the 0015 in a0015 mean 15 or 0.15? Does the 5 in inv5 mean the inv_con parameter equals 5?

Finally, thanks for your great work! I would appreciate any suggestions you can provide.