devavratTomar / SST

Self-Supervised Generative Style Transfer for One-Shot Medical Image Segmentation

Some issues of image pre-processing #2

Open AXu0511 opened 1 year ago

AXu0511 commented 1 year ago

Is this line of code trying to constrain the intensity values of the image to the range [-1, 1]?

Why does this line of code add 1 to the image and then divide by 2? Is it to recover the true intensity values of the image before computing the loss?

devavratTomar commented 1 year ago

Hello. Yes, the above line of code maps the intensities to the range [-1, 1]. For the second question, the loss function assumes images have a positive range, so we normalize them back to the range [0, 1].
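A minimal sketch of the two mappings being discussed (the tensor names here are illustrative, not the repository's actual variables):

```python
import torch

# Assume img was min-max normalized to [0, 1] during pre-processing.
img = torch.rand(1, 1, 256, 256)

# Map [0, 1] -> [-1, 1] before feeding the network.
img_net = img * 2.0 - 1.0

# Map back from [-1, 1] -> [0, 1] before a loss that assumes
# non-negative intensities.
img_pos = (img_net + 1.0) / 2.0
```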

AXu0511 commented 1 year ago

Thanks a lot for your reply. In fact, I am curious why the image intensities need to be constrained to the range [-1, 1]. Is this operation necessary? Also, in the fake-data generation phase, the code at line 85 scales the image intensities to [-1, 1]. Are the intensity values of predicted_img at line 118 in the range [-1, 1]? Does the code at line 120 bring the intensity values back to [0, 1]? And what specific transformation is line 124 trying to apply to the image? Finally, the data I generated using your code are very similar to each other, with no significant deformation, and the overly similar data make the segmentation network perform poorly. How can I solve this problem?

devavratTomar commented 1 year ago

Hi. I don't think keeping the intensity range between -1 and 1 is necessary. The way we originally pre-processed the data was to scale the highest intensity in the volume to 1 and the lowest to 0 (before saving to disk). You can perform your own normalisation. The networks were trained in the range [-1, 1]; that is why the image is multiplied by 2 and 1 is subtracted (or the other way around, to bring it back to the [0, 1] range before saving to disk).
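For reference, a per-volume min-max normalisation like the one described could look like this (a sketch under the stated assumptions, not the repository's exact pre-processing script):

```python
import numpy as np

def normalize_volume(volume: np.ndarray) -> np.ndarray:
    """Scale a 3D volume so its lowest intensity maps to 0 and highest to 1."""
    v_min, v_max = volume.min(), volume.max()
    # Small epsilon guards against division by zero on flat volumes.
    return (volume - v_min) / (v_max - v_min + 1e-8)
```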

devavratTomar commented 1 year ago

The data generated should be diverse (similar in diversity to the original unlabelled training dataset). If the data is not diverse in style, the UNet may not perform well. In line 124 we create a mask to discard the segmentation values of pixels that are very dark (the lowest pixel value corresponds to 0). You may ignore this line as well.
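A hedged sketch of what such a masking step might look like (all names and the cutoff value here are assumptions; the actual line 124 may differ):

```python
import torch

# Placeholder tensors standing in for the generator's outputs.
predicted_img = torch.rand(1, 1, 256, 256) * 2 - 1      # image in [-1, 1]
predicted_seg = torch.randint(0, 4, (1, 1, 256, 256))   # label map

# Pixels at (or near) the minimum intensity are treated as empty
# background and their segmentation labels are discarded.
threshold = -1.0 + 1e-3                  # "very dark" cutoff (assumption)
mask = predicted_img > threshold         # True where the pixel carries signal
masked_seg = predicted_seg * mask        # zero out labels on dark pixels
```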

AXu0511 commented 1 year ago

> The data generated should be diverse (similar in diversity to the original unlabelled training dataset). If the data is not diverse in style, the UNet may not perform well. In line 124 we create a mask to discard the segmentation values of pixels that are very dark (the lowest pixel value corresponds to 0). You may ignore this line as well.

Thanks again. I will continue to try to understand your work and code.