seungjunlee96 / emergency-triage-of-brain-computed-tomography-via-anomaly-detection-with-a-deep-generative-model


Inquiry about the Emergency Triage of Brain Computed Tomography Model #11

Open anantpal07 opened 1 month ago

anantpal07 commented 1 month ago

Dear Mr. Seungjunlee,

I hope this message finds you well. Firstly, I wanted to extend my gratitude for sharing your model for emergency triage of brain computed tomography on GitHub. Your contribution to the field is greatly appreciated.

I am writing to discuss an issue I've encountered while using your model. I have successfully run the model on the demo data provided in the repository, and the results have been promising. However, when applying the model to my own patient data—brain CT DICOM images—I haven't been able to achieve suitable results despite following the preprocessing steps outlined in your documentation.

Specifically, I have generated masks using the CT-BET repository as instructed but still haven't seen significant improvements in the model's performance on my dataset. I believe having access to the dataset you used during the development of the model could be immensely helpful in understanding potential discrepancies and improving the performance on my data.

Would it be possible for you to share the dataset or provide guidance on how to address this issue? Any assistance or insights you could offer would be invaluable in advancing my research and improving patient care.

Thank you once again for your time and for sharing your expertise with the community. I look forward to your response.

Best regards, Anant Paliwal

seungjunlee96 commented 1 month ago

Dear Anant Paliwal,

It seems that the root cause of the performance issue might be related to the custom dataset code for your own data. While the model performed well on the demo data provided in the repository, the preprocessing steps and assumptions made in that code might not be compatible with your data.

I will look into the data you provided at the link below and evaluate whether the custom dataset code works for that file: https://github.com/seungjunlee96/emergency-triage-of-brain-computed-tomography-via-anomaly-detection-with-a-deep-generative-model/issues/10#issuecomment-2042619355

Best regards, Seungjun

anantpal07 commented 1 month ago

Dear Seungjun,

Thank you for your prompt response and for considering the issue with my custom dataset code. I agree that this could be a potential root cause of the performance disparity between the demo data and my own patient data.

I appreciate your willingness to evaluate the data I've provided. However, I must inform you that the data I previously shared is now outdated. I have since updated the dataset, and I am sharing the most recent version with you for evaluation.

Furthermore, I want to assure you that I have shared all relevant details, including my preprocessing steps and the results I obtained after running your model on my updated dataset. Your insights into potential discrepancies and improvements will be invaluable as I continue my research in this area.

Please let me know if there are any additional details or resources I can provide to assist you in this evaluation process. I'm eager to collaborate with you to address this issue and advance our understanding in this critical area of research.

Thank you once again for your time and support.

Best regards, Anant Paliwal

Dataset.zip

seungjunlee96 commented 1 month ago

Dear Anant Paliwal,

I have reviewed the file (Dataset.zip) you provided and noticed an issue with the brain extraction process. The brain mask in the image (bet.npy) appears to be incorrect, indicating that the brain extraction may not have been applied properly. I recommend checking the brain extraction step to ensure it has been implemented correctly.

Note that I used the code below to visualize the figure.

import numpy as np
import matplotlib.pyplot as plt

bet_npy = np.load(bet_npy_path)
n_slices = bet_npy.shape[0]
for i in range(n_slices):
    bet_mask = (bet_npy[i] * 255).astype(np.uint8)
    plt.imshow(bet_mask, cmap="gray")
    plt.show()
    plt.close()

Best regards,
Seungjun

[image attached]
anantpal07 commented 1 month ago

Dear Seungjun,

Thank you for your feedback regarding the brain extraction process. I have generated additional BET files, but the results seem to be similar to the initial one. I am sharing with you another BET file (bet2.npy) along with the results generated from it.

Could you please review the new BET file to check if the brain extraction has been applied correctly? Your insights would be greatly appreciated.

bet file and results.zip

Thank you for your assistance.

Best regards, Anant

anantpal07 commented 1 month ago

Hello Mr. Seungjun,

I hope you're doing well. I wanted to follow up on the issue #11 that I raised regarding the brain extraction process. I understand that you might be busy, but I would greatly appreciate any update you could provide.

If there's any additional information or assistance you need from my side, please let me know. I'm more than happy to help.

Looking forward to hearing from you soon.

Best regards, Anant

seungjunlee96 commented 1 month ago
[image attached]

The images above have been masked using a PNG file and the corresponding BET file. However, I'm uncertain if the provided PNG files and BET files are correctly paired. I recommend verifying this by displaying the files.
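One way to make that pairing check concrete is to measure what fraction of the bright CT pixels fall inside the BET mask: a correctly paired mask should cover nearly all of them, while a mismatched or misaligned mask gives a low fraction. This is a minimal sketch with synthetic data; `mask_brain_overlap` and its threshold are illustrative, not part of the repository.

```python
import numpy as np

def mask_brain_overlap(png_slice, bet_slice, threshold=10):
    """Fraction of bright PNG pixels that fall inside the BET mask.

    A correctly paired mask should cover most of the non-background
    (bright) pixels; a low fraction suggests a mismatched mask.
    """
    bright = png_slice > threshold          # rough foreground of the CT slice
    inside = np.logical_and(bright, bet_slice > 0)
    return inside.sum() / max(bright.sum(), 1)

# Synthetic example: a centered "brain" disk and a matching mask.
yy, xx = np.mgrid[:128, :128]
disk = ((yy - 64) ** 2 + (xx - 64) ** 2) < 40 ** 2
png_slice = (disk * 200).astype(np.uint8)
good_mask = disk.astype(np.uint8)
bad_mask = np.roll(good_mask, 60, axis=1)   # misaligned mask

print(mask_brain_overlap(png_slice, good_mask))  # → 1.0
print(mask_brain_overlap(png_slice, bad_mask))   # much lower
```

Running the same check on each real PNG/BET pair would quickly reveal which files are mispaired.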

seungjunlee96 commented 1 month ago

Also, it seems that your BET file has to be rotated by 270 degrees, and the PNG files used different preprocessing steps from the paper (they should be 3-channel, not 1-channel).
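For the rotation, one option is `np.rot90` applied across the slice plane of the volume. This is a sketch only; `np.rot90` rotates counter-clockwise, so `k=3` gives a 270-degree counter-clockwise rotation, and the direction that actually matches the data should be confirmed visually.

```python
import numpy as np

def rotate_bet_volume(bet, k=3):
    """Rotate each axial slice of a (slices, H, W) BET volume by
    k * 90 degrees counter-clockwise; k=3 gives 270 degrees."""
    return np.rot90(bet, k=k, axes=(1, 2))

# Tiny demo volume with a single marker pixel near the top edge.
bet = np.zeros((2, 4, 4), dtype=np.uint8)
bet[:, 0, 1] = 1
rotated = rotate_bet_volume(bet)
print(rotated.shape)  # → (2, 4, 4)
```

Visualizing a rotated slice next to its PNG (as in the earlier snippet) confirms whether `k=3` or `k=1` is the right choice for a given dataset.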

anantpal07 commented 1 month ago

The PNG file folder I shared previously contained the results after running the model, not the input PNG files. The input PNG files were indeed 3-channel (RGB) images. I am uploading the folder containing the actual input PNG files again.

png files.zip

I'll also make sure to rotate the BET file by 270 degrees as you suggested.

anantpal07 commented 1 month ago

Thank you for your feedback. I noticed that the preprocessing steps for PNG files are not explicitly mentioned in your paper. Could you please provide the detailed preprocessing steps required to properly process the PNG files, including how to handle the channels?

anantpal07 commented 1 week ago

I followed these steps in my code for preprocessing. Please verify whether these are correct:

1. Remove noise
2. Crop image
3. Pad image
4. Combine image as 3 channels
5. Save as PNG

Here is the code I used:

import numpy as np
import cv2
import pydicom
from PIL import Image
from scipy import ndimage
from skimage import morphology

def remove_noise(file_path, display=False):
    # Read the DICOM, convert to Hounsfield units, and apply a brain window.
    # transform_to_hu and window_image are helpers defined elsewhere in my code.
    medical_image = pydicom.read_file(file_path)
    image = medical_image.pixel_array

    hu_image = transform_to_hu(medical_image, image)
    brain_image = window_image(hu_image, 40, 80)

    # Keep only the largest connected component (the head).
    segmentation = morphology.dilation(brain_image, np.ones((1, 1)))
    labels, label_nb = ndimage.label(segmentation)

    label_count = np.bincount(labels.ravel().astype(int))
    label_count[0] = 0

    mask = labels == label_count.argmax()

    mask = morphology.dilation(mask, np.ones((1, 1)))
    mask = ndimage.binary_fill_holes(mask)
    mask = morphology.dilation(mask, np.ones((3, 3)))
    masked_image = mask * image

    return masked_image

def crop_image(image, display=False):
    # Crop to the bounding box of the non-zero (foreground) pixels.
    mask = image == 0
    coords = np.array(np.nonzero(~mask))
    top_left = np.min(coords, axis=1)
    bottom_right = np.max(coords, axis=1)
    cropped_image = image[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1]]
    return cropped_image

def add_pad(image, new_height=512, new_width=512):
    # Center the cropped image on a fixed-size canvas.
    height, width = image.shape
    final_image = np.zeros((new_height, new_width))

    pad_left = int((new_width - width) // 2)
    pad_top = int((new_height - height) // 2)

    final_image[pad_top:pad_top + height, pad_left:pad_left + width] = image
    return final_image

def windowing_brain(npy, channel=3):
    dcm = npy.copy()
    img_rows = 512
    img_cols = 512

    if channel == 1:
        npy = npy.squeeze()
        npy = cv2.resize(npy, (512, 512), interpolation=cv2.INTER_LINEAR)
        npy = npy + 40
        npy = np.clip(npy, 0, 160)
        npy = npy / 160
        npy = 255 * npy
        npy = npy.astype(np.uint8)
    elif channel == 3:
        # Three windows stacked as RGB channels.
        dcm0 = dcm[0] - 5
        dcm0 = np.clip(dcm0, 0, 50)
        dcm0 = dcm0 / 50.
        dcm0 *= (2 ** 8 - 1)
        dcm0 = dcm0.astype(np.uint8)

        dcm1 = dcm[0] + 0
        dcm1 = np.clip(dcm1, 0, 80)
        dcm1 = dcm1 / 80.
        dcm1 *= (2 ** 8 - 1)
        dcm1 = dcm1.astype(np.uint8)

        dcm2 = dcm[0] + 20
        dcm2 = np.clip(dcm2, 0, 200)
        dcm2 = dcm2 / 200.
        dcm2 *= (2 ** 8 - 1)
        dcm2 = dcm2.astype(np.uint8)

        npy = np.zeros([img_rows, img_cols, 3], dtype=int)
        npy[:, :, 0] = dcm0
        npy[:, :, 1] = dcm1
        npy[:, :, 2] = dcm2

    return np.uint8(npy)

def dcm2img1(dcm, windowing=True):
    # crop_center is a helper defined elsewhere in my code.
    img = dcm.astype(np.int32) - 1024
    img = crop_center(np.expand_dims(img, 0))

    if windowing:
        return windowing_brain(img)
    else:
        return img

Apply the functions:

masked_image = remove_noise(src + str(x) + ".dcm")
cropped_image = crop_image(masked_image)
padded_image = add_pad(cropped_image)
combined = dcm2img1(padded_image)

png_image = Image.fromarray(combined, mode="RGB")
png_image.save(dst + str(x) + ".png")
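As a final sanity check on the steps above, it may help to verify that each saved PNG really is a 3-channel 512x512 image before feeding it to the model. This is a generic sketch, not part of the original pipeline; `check_png` is a hypothetical helper and the demo uses an in-memory image instead of a real file.

```python
import io
import numpy as np
from PIL import Image

def check_png(path_or_file, expected_size=(512, 512)):
    """Return (ok, details) for one preprocessed PNG: must be RGB 512x512."""
    img = Image.open(path_or_file)
    arr = np.asarray(img)
    ok = img.mode == "RGB" and arr.shape == (*expected_size, 3)
    return ok, {"mode": img.mode, "shape": arr.shape}

# Round-trip demo: write a blank RGB PNG to a buffer, then re-check it.
buf = io.BytesIO()
Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8), mode="RGB").save(buf, format="PNG")
buf.seek(0)
ok, details = check_png(buf)
print(ok, details)  # → True {'mode': 'RGB', 'shape': (512, 512, 3)}
```

Running this over the whole output folder would catch single-channel or wrongly sized slices before they reach the model.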