dominicmaguire / bac-model-code

MIT License

Preprocessing for segmentation #1

Open kail85 opened 1 year ago

kail85 commented 1 year ago

Does your segmentation model run on for-presentation dcm files? If so, what are the pre-processing steps for inference? This part is not mentioned in your paper, so my guess is uint16 -> uint8 -> grayscale to RGB. However, the model predicts everything as background.

dominicmaguire commented 1 year ago

As described in the paper, the 12-bit dcm files were converted to 16-bit png files using pydicom. For segmentation, these were cropped to the breast region and converted to RGB.
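For anyone reproducing this, the conversion can be sketched in Python. The use of pydicom is confirmed above, but the exact rescaling and the use of Pillow for writing the 16-bit png are my assumptions, and the file paths are placeholders:

```python
import numpy as np

def rescale_12bit_to_16bit(arr):
    """Map 12-bit pixel values [0, 4095] onto the full uint16 range.
    Assumption: this is the normalisation used; the paper only states
    that 12-bit dcm files became 16-bit pngs."""
    arr = arr.astype(np.float64)
    return np.round(arr / (2**12 - 1) * (2**16 - 1)).astype(np.uint16)

def dcm_to_png16(dcm_path, png_path):
    """Read a DICOM with pydicom and save it as a 16-bit PNG.
    pydicom and Pillow are imported lazily so the scaling helper
    above can be used and tested without them installed."""
    import pydicom
    from PIL import Image
    ds = pydicom.dcmread(dcm_path)
    img = rescale_12bit_to_16bit(ds.pixel_array)
    Image.fromarray(img).save(png_path)
```

The scaling helper is kept separate so the mapping of 12-bit values onto the full uint16 range can be checked on its own, without a dcm file to hand.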

kail85 commented 1 year ago

Yeah, I think I followed those steps:

fun = @(block_struct) semanticseg(block_struct.data, net, ...
    OutputType='uint8', ExecutionEnvironment='gpu');
pred_mask = blockproc(img_rgb, [512, 512], fun, ...
    PadPartialBlocks=true, UseParallel=true, DisplayWaitbar=false);


I don't think you need to crop and pad the image for inference since the images for training were not resized.

My questions are:
- did you use for-processing or for-presentation images for training?
- did I miss anything in my steps? I cannot even get a breast segmentation; every pixel is classified as background.
- could you please include one sample image for an inference demo?

Thanks!

dominicmaguire commented 1 year ago

I used for-presentation images for training and testing, and converted the dcm images to pngs with code identical to the above. I also used the same MATLAB code above to convert the pngs to RGB.

The segmentation model was trained using image patches with MATLAB's randomPatchExtractionDatastore and creates a segmented image with the segmentImagePatchwise function.
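The random patch extraction idea can be sketched in numpy. This is only an illustration of sampling aligned image/mask patches; it is not MATLAB's actual randomPatchExtractionDatastore, and the image dimensions are made up:

```python
import numpy as np

def random_patches(image, mask, patch_size=512, n_patches=4, seed=0):
    """Yield random, spatially aligned (image, mask) patch pairs.
    A numpy sketch of the idea behind randomPatchExtractionDatastore,
    not the MATLAB implementation."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    for _ in range(n_patches):
        r = rng.integers(0, h - patch_size + 1)
        c = rng.integers(0, w - patch_size + 1)
        yield (image[r:r + patch_size, c:c + patch_size],
               mask[r:r + patch_size, c:c + patch_size])

# Dummy full-field image and label mask (sizes are illustrative only)
img = np.zeros((2048, 1664, 3), dtype=np.uint16)
msk = np.zeros((2048, 1664), dtype=np.uint8)
for p_img, p_msk in random_patches(img, msk):
    assert p_img.shape == (512, 512, 3) and p_msk.shape == (512, 512)
```

Sampling the same row/column offsets for image and mask is what keeps each patch pair aligned, which is the property the datastore provides during training.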

I no longer have access to the dataset images, but there are FFDM images available here to download. Case 13 should be useful.

Once converted to RGB, you can run the trained network like so:

% Load the RGB-converted image (placeholder filename)
im = imread("yourRGBImage");
figure
imshow(im)

% Patchwise segmentation with 512 x 512 patches
segmentedImage = segmentImagePatchwise(im, net, [512 512]);

% Overlay the predicted labels on the input image
figure
B = labeloverlay(im, segmentedImage);
imshow(B)

kail85 commented 1 year ago

Still no success on Case 13. I was hoping to at least get a breast segmentation, but every pixel is still classified as background. Below is my end-to-end implementation.

%%
dcm_path = 'MammoTomoUPMC_Case13\20080716 090904 [ - MAMMOGRAM DIGITAL SCR BILAT]\Series 002 [MG - L CC]\1.3.6.1.4.1.5962.99.1.2280943358.716200484.1363785608958.164.0.dcm';

dcm_img = dicomread(dcm_path);

%% Pre-process
img_gray = im2uint8(mat2gray(dcm_img, [0, 2^12-1]));
img_rgb = cat(3, img_gray, img_gray, img_gray);

%% Inference (author's original function)
pred_mask = segmentImagePatchwise(img_rgb, net, [512, 512]);

Any chance you could provide a .mat file containing the checkpoint as a DAGNetwork rather than a separate layer graph and weights? I just want to make sure I imported your weights properly.

dominicmaguire commented 1 year ago

Have you omitted the dcm -> png conversion step?

kail85 commented 1 year ago

Is it necessary? png is just a data container that provides uint8 or uint16 storage. Keeping the data in RAM as uint16 should be equivalent to saving it to png as uint16.

dominicmaguire commented 1 year ago

Try im2uint16 rather than im2uint8. It should work then.

Case 13 is actually a diagnostic rather than a screening image. A more suitable image is found in Case 3. It will show the breast segmentation although there is no BAC present. I will have a look through the other cases to see if there are any with BAC present.
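A quick numpy sketch (my own, mirroring mat2gray followed by im2uint8 versus im2uint16; the pixel values are made up) shows why the choice matters:

```python
import numpy as np

# Hypothetical 12-bit pixel values, as dicomread would return them
dcm = np.array([0, 1024, 4095], dtype=np.uint16)
norm = dcm / (2**12 - 1)          # mat2gray(dcm_img, [0, 2^12-1])

as_uint8 = np.round(norm * 255).astype(np.uint8)      # im2uint8
as_uint16 = np.round(norm * 65535).astype(np.uint16)  # im2uint16

# The uint8 path compresses everything into [0, 255]; a network trained
# on 16-bit pngs has never seen intensities in that range, which is
# consistent with it predicting everything as background.
print(as_uint8.tolist(), as_uint16.tolist())
```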