Hi, I uploaded the preparation process for the abdominal datasets.
You can take it as a reference for your own datasets. Overall, I normalized the data into the range [0, 1] and padded it to a size of 512×512.
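For illustration, here is a minimal sketch of that preprocessing on a single 2-D slice, assuming NumPy arrays no larger than 512×512 (the function name and details are illustrative, not the repository's actual script):

```python
import numpy as np

def preprocess_slice(img: np.ndarray, target_size: int = 512) -> np.ndarray:
    """Min-max normalize to [0, 1], then zero-pad to target_size x target_size."""
    img = img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # avoid division by zero

    h, w = img.shape  # assumes h, w <= target_size; larger slices need cropping/resampling first
    pad_h, pad_w = target_size - h, target_size - w
    # Split the padding evenly; odd remainders go to the bottom/right edge.
    return np.pad(img,
                  ((pad_h // 2, pad_h - pad_h // 2),
                   (pad_w // 2, pad_w - pad_w // 2)),
                  mode="constant", constant_values=0.0)
```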
Thank you very much for your response.
Dear Mr. Yao, I have another question about the ct2mr checkpoint. When I loaded the ct2mr.pt model you provided, I encountered this error: RuntimeError: Error(s) in loading state_dict for DARUnet: Missing key(s) in state_dict: "L1_fromimg.conv_skip.1.weight", "L1_fromimg.conv_skip.1.bias", ... Unexpected key(s) in state_dict: "attn4.conv_encoder.0.weight", "attn4.conv_encoder.0.bias", ...
I found that someone else had the same problem, and you provided him with another version of the ct2mr and mr2ct checkpoints via the link https://pan.baidu.com/s/1wXDjC_zJ3B7G3buhAXgVJg?pwd=19i0. Unfortunately, I am not able to download from that link. Would it be possible for you to share that version with me via Google Drive?
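In case it is useful for diagnosing this, here is a rough sketch for diffing the checkpoint's keys against the model's to see which architecture version it was saved from (the DARUnet import path and constructor call are guesses at the repo layout):

```python
import torch
from models.dar_unet import DARUnet  # import path and constructor args are illustrative

model = DARUnet()
ckpt = torch.load("ct2mr.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # some checkpoints nest weights under "state_dict"

model_keys = set(model.state_dict().keys())
ckpt_keys = set(state.keys())
print("Missing from checkpoint:", sorted(model_keys - ckpt_keys))
print("Unexpected in checkpoint:", sorted(ckpt_keys - model_keys))
```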
Hi, I uploaded the ckpt.
https://drive.google.com/file/d/1cm8OEQKxRuHXdXy8R9KqZAeKmmpVsIhU/view?usp=sharing
Best wishes~
I’m so grateful for your help.
Dear Mr. Yao, Firstly, I would like to extend my gratitude for your assistance thus far. Upon evaluating the CTtoMR model you generously provided, I observed some unexpected results, as outlined below:
Dice per class: 0.0788, 0.0374, 0.0137, 0.0127
Overall Dice: 0.0356
ASD per class: 9.4453, inf, inf, inf
Overall ASD: inf
Might you have insights into the causes of these observations and any recommendations on how to address them?
Furthermore, I noticed that the validation phase was omitted during the training of the image-to-image translation model, and I am uncertain about the reasoning behind this decision. Under such circumstances, how might one identify the optimal model? Would a visual assessment be recommended?
Thank you in advance for your guidance.
To be honest, I cannot determine what caused such a result because I cannot replicate this issue, and everything is functioning correctly on my end. I suspect it may be due to inconsistencies in the Python version or library versions. Some default parameters may be set differently in various versions of Torch, and the behavior of certain functions may also vary. I cannot confirm this.
You can try adding the following lines at the very beginning of the inference file:

import torch.backends.cudnn as cudnn
cudnn.deterministic = True
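If it helps, here is a fuller reproducibility preamble that is commonly paired with that setting; whether every seed below matters for this particular script is an assumption on my part:

```python
import random
import numpy as np
import torch
import torch.backends.cudnn as cudnn

# Fix all RNG sources and force deterministic cuDNN kernels.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
cudnn.deterministic = True
cudnn.benchmark = False  # autotuning can select non-deterministic kernels
```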
Meanwhile, the results I have on my side are as follows:
[0.94225276 0.91949344 0.89912534 0.9345378 ] [0.57352691 0.5621793 0.72125019 0.37196489]
image shape:(320, 320, 30) mask shape:(320, 320, 30) target_shape:[435, 435, 60]
[0 1 2 3 4] torch.Size([1, 5, 60, 435, 435])
[0.91267955 0.89565486 0.84346557 0.9006127 ] [0.77483473 0.56661782 0.80452874 0.46962353]
image shape:(256, 256, 39) mask shape:(256, 256, 39) target_shape:[465, 465, 78]
[0 1 2 3 4] torch.Size([1, 5, 78, 465, 465])
[0.93591267 0.90216845 0.8863868 0.93068665] [1.17235788 0.50188503 0.64777277 1.462603 ]
image shape:(320, 320, 34) mask shape:(320, 320, 34) target_shape:[435, 435, 68]
[0 1 2 3 4] torch.Size([1, 5, 68, 435, 435])
[0.94669104 0.8950796 0.8966523 0.92914045] [0.45004656 0.43425086 0.50581982 0.34409423]
Dice per class: 0.934384 0.9030991 0.8814075 0.92374444
Overall Dice: 0.9106587767601013
ASD per class: 0.7426915185418835 0.5162332531058162 0.6698428784888762 0.6620714135699539
Overall ASD: 0.6477097659266325
If you require the predicted results for visualization, I would be happy to share them with you.
Thank you for your response.
Dear Mr. Yao, I found your paper very creative and tried to run the code from your article, but I am not sure that I have done the data preparation for the image-to-image translation step correctly. Would it be possible to share the code for the data preparation step of the image-to-image translation?