Kaiseem / DAR-UNet

[JBHI2022] A novel 3D unsupervised domain adaptation framework for cross-modality medical image segmentation
Apache License 2.0
41 stars · 5 forks

Slices for training Disentangled GAN #5

Closed nihpat19 closed 8 months ago

nihpat19 commented 1 year ago

When training the disentangled GAN as part of the i-2-i stage, do the domain slices need to match in terms of the overall context? (i.e. should they both belong to the same section of the scan)

Kaiseem commented 1 year ago

Yes. You should make sure the two domains share the same content, i.e., the same section of the scan. Otherwise, domain-specific content features will be disentangled as style, causing a lossy transformation. In brief, you can do style transfer between CT abdominal and MRI abdominal data, but you can't do style transfer between CT brain and MRI abdominal data.

nihpat19 commented 1 year ago

Ok, but does each scan have to come from the same section of the abdomen? I.e., if I'm doing style transfer between CT liver and MRI liver data, should the first slice from the CT domain come from the same z-axis location in the liver as the first slice from the MR domain?

Kaiseem commented 1 year ago

No, you don't have to align them explicitly. Since the probability of an image at a specific z-axis location should be the same for both domains (e.g., 20% of slices contain liver), you can simply feed them all into the GAN, which will learn those probabilities. Even if the probabilities for the two domains are not exactly the same, I don't think it is a big problem.

In addition, this may be a potential improvement direction for my method, as I use a 2D GAN for 3D data style transfer, slice by slice. However, if you use a 3D GAN, you may need to handle other issues, e.g., MRI images are captured with the patient lying down, while CT images are captured standing. This may also introduce content mismatch into the style transfer problem.

nihpat19 commented 1 year ago

Ok. I've tried training the model on liver CT to liver MR segmentation using the CHAOS dataset, and I have not been getting good validation results from the second-stage segmentation network. I'm not sure what exactly I'm doing wrong. In terms of preprocessing, I cropped both the CTs and MRs down to 256 x 256 x d and padded them to 512 x 512 x d. I then rescaled the intensities to between 0 and 1 using min-max normalization before saving the scans and stacking the 2D slices together for each domain to do the image translation. I only took the first 400 or so slices from the CT domain images to make sure that the number of slices matched the number from the MR domain.
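For reference, the crop / pad / normalize pipeline described above can be sketched roughly like this. This is a minimal NumPy sketch of my own steps, not code from the repo; the (H, W, D) array layout and the center-crop / symmetric-pad choices are assumptions:

```python
import numpy as np

def center_crop_xy(vol, size=256):
    """Center-crop the first two (x, y) axes of a (H, W, D) volume."""
    h, w, _ = vol.shape
    y0, x0 = (h - size) // 2, (w - size) // 2
    return vol[y0:y0 + size, x0:x0 + size, :]

def pad_xy(vol, size=512):
    """Zero-pad the (x, y) axes symmetrically up to `size`."""
    h, w, _ = vol.shape
    py, px = size - h, size - w
    return np.pad(vol, ((py // 2, py - py // 2),
                        (px // 2, px - px // 2),
                        (0, 0)))

def minmax_normalize(vol):
    """Rescale intensities to [0, 1]."""
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo + 1e-8)

# Hypothetical 300 x 300 x 40 scan as a stand-in for a CHAOS volume
vol = np.random.rand(300, 300, 40).astype(np.float32) * 1000
out = minmax_normalize(pad_xy(center_crop_xy(vol)))
print(out.shape)  # (512, 512, 40)
```

Each 2D slice `out[:, :, k]` is then saved and stacked per domain for the image translation stage.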

For the segmentation validation, I found that I could only use 2 of the CHAOS MRs, as these were the only ones that seemed to have enough slices to match the size of the window that was used at that stage.

I don't know exactly what I'm doing wrong, and I have spent close to a month trying to get this to work.

Kaiseem commented 1 year ago

I don't know why, but I am willing to help you. However, the generated dataset for stage 2 is quite large (~20-30 GB), which may be hard to transfer. Can you use BaiduPan (https://pan.baidu.com) to transfer the data? I suggest buying one month of VIP for faster download speed if you can use it.

Meanwhile, which would you prefer: the trained checkpoints or the processed data? Are you trying to start UDA research?

nihpat19 commented 1 year ago

Thank you very much. Could you upload the data to Google Drive so that I can download it? I can try and use BaiduPan, but Google Drive would be easier.

I would prefer both the checkpoints and the processed data. I am doing UDA research into Liver CT --> Liver MR and the other way around.

Kaiseem commented 1 year ago

Sure, let's keep in touch. I will let you know when I upload it.

nihpat19 commented 1 year ago

Awesome. Please let me know when you upload it. Thank you very much for your help.

Kaiseem commented 1 year ago

@nihpat19

https://pan.baidu.com/s/1o1SkmxiXj45IbsIm8F1hgQ?pwd=66vs

Contains the dataset and checkpoints

nihpat19 commented 1 year ago

Hi, thank you very much for uploading the dataset and checkpoints. I'm trying to access it, but there seem to be security issues with using pan.baidu.com. If possible, could you upload it to Google Drive and send me a link from there? If not, I will try to resolve whatever issues are preventing me from accessing the link, but I don't know if I can.

nihpat19 commented 1 year ago

I tried to create an account on pan.baidu.com, but I got an error saying that accounts from overseas could not be created. If you can send the preprocessed data or the saved models to me via Google Drive, that would be ideal.

Kaiseem commented 1 year ago

Sorry for the late response; I'm busy with paper writing. I will upload the dataset to Google Drive and let you know when it's done.

Kaiseem commented 1 year ago

@nihpat19

https://drive.google.com/file/d/1tEK1AUF3som-q1ri-BdKRyHXw_xo0bKi/view?usp=sharing

If you have any questions, let me know.

nihpat19 commented 1 year ago

Hi, @Kaiseem. Thank you very much for sending the data. I just have one question. I'm assuming you used the same preprocessing steps mentioned in your paper. Did you use different spacings for the CTs and the MRIs as mentioned in the paper? If so, why?

Thanks again for all your help.

Kaiseem commented 1 year ago

Hi, I used the same preprocessing steps mentioned in my paper, though there is a small difference due to an unavoidable factor. For the CTs, the spacing information can be read from the .nii.gz files, so they have been spatially normalized to a spacing of [4, 1, 1]. However, the MRIs are DICOM files, which do not contain spacing information. But the dataset description states: "The data sets are acquired by a 1.5T Philips MRI, which produces 12 bit DICOM images having a resolution of 256 x 256. The ISDs vary between 5.5-9 mm (average 7.84 mm), x-y spacing is between 1.36 - 1.89 mm (average 1.61 mm)." So when I process the MRIs, I just halve the z spacing to ensure a similar ratio to the CTs. However, I don't think it matters much. I hope my answer helps you.
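For context, the spacing normalization above can be sketched with SciPy. This is a rough sketch, not the repo's code; the (z, y, x) axis order and the example voxel spacings are assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(vol, spacing, target=(4.0, 1.0, 1.0)):
    """Trilinearly resample a (z, y, x) volume from `spacing` (mm per voxel)
    to `target` spacing, changing the voxel grid size accordingly."""
    factors = [s / t for s, t in zip(spacing, target)]
    return zoom(vol, factors, order=1)

# Hypothetical CT volume with 2 mm isotropic voxels
ct = np.random.rand(100, 256, 256).astype(np.float32)
ct_resampled = resample_to_spacing(ct, spacing=(2.0, 2.0, 2.0))
print(ct_resampled.shape)  # (50, 512, 512)
```

For the MRIs, whose headers reportedly lack spacing, one would pass an assumed spacing built from the reported averages (about 7.84 mm slice distance, 1.61 mm in-plane) with the z value halved, as described above.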

nihpat19 commented 1 year ago

Ok. So as long as the MRIs have a similar spacing ratio to the CTs, then it should be fine?

Kaiseem commented 1 year ago

Yes, as long as the source and target domain images have a spacing ratio close to [4, 1, 1].