JoHof / lungmask

Automated lung segmentation in CT
Apache License 2.0

Segmentation Results Are All Zeros #108

Closed VinyehShaw closed 2 months ago

VinyehShaw commented 2 months ago

Hello,

I am using the lungmask library for lung segmentation on CT-RATE (here is an example nii.gz data download link from the Hugging Face dataset), but I am encountering an issue where the segmentation results are all zeros. Despite trying to rescale and adjust the HU values of the input image, the problem persists.

Here is my code:

```python
import SimpleITK as sitk
import os
from lungmask import mask
import nibabel as nib
import numpy as np


def rescale_hu(image):
    array = sitk.GetArrayFromImage(image)
    rescaled_array = (array - np.min(array)) * 4095 / (np.max(array) - np.min(array))
    rescaled_array -= 1024  # shift the minimum value to -1024
    return sitk.GetImageFromArray(rescaled_array)


if __name__ == "__main__":
    # Filename under path example_data/ctvit-transformer/1 or 2
    input_path = "./example_data/ctvit-transformer/2/BGC3109958/Thorax 1,50 Br40 S3_(512, 512).nii.gz"
    input_dir, _ = os.path.split(input_path)

    # Read file
    input_image = sitk.ReadImage(input_path)
    spacing = input_image.GetSpacing()
    direction = input_image.GetDirection()
    origin = input_image.GetOrigin()

    # Rescale HU values
    input_image = rescale_hu(input_image)

    # Check HU value range after rescaling
    array = sitk.GetArrayFromImage(input_image)
    print(f"HU value range after rescaling: {np.min(array)}, {np.max(array)}")

    # Segmentation (default model is U-net(R231))
    segmentation = mask.apply(input_image)
    segmentation[segmentation == 2] = 1

    # Save results
    out_label = sitk.GetImageFromArray(segmentation)
    out_label.SetSpacing(spacing)
    out_label.SetOrigin(origin)
    out_label.SetDirection(direction)

    output_path = os.path.join(input_dir, "seg.nii.gz")
    sitk.WriteImage(out_label, output_path)
    print(f"Segmentation saved to {output_path}")

    # Read and check segmentation results
    segmented_img = nib.load(output_path)
    segmented_data = segmented_img.get_fdata()

    # Print unique values
    unique_values = np.unique(segmented_data)
    print(f"Unique values in {output_path}: {unique_values}")
```

Could you please provide guidance on how to resolve this issue?

Thank you for your assistance.

JoHof commented 2 months ago

Hi Vinyeh,

sure, happy to help. This is an unfortunate case of improper data handling by the people who prepared that dataset, but you can still make it work. Two aspects are important: (1) the intensities and (2) the image orientation. Both have to be correct for the model to work as expected.

Let's start with HU: you already realized that the intensities are not stored as proper HU values. One would usually expect .nii.gz CTs to be stored in correct HU, because the NIfTI header doesn't carry the slope and intercept needed to get back to proper HU. In your code you try to rescale with some assumptions (min = -1024 and max = 4095). That might happen to work for selected cases, but assumptions are always dangerous: in your case, for example, the CT is padded with a background value much lower than -1024, and the maximum value is always unknown. Fortunately, the data you provided comes with a metadata.json file that contains the intercept and slope needed to map correctly back to HU:

```python
import SimpleITK as sitk
import json
import math
import numpy as np


def rescale_hu(image: sitk.Image, intercept: float, slope: float) -> sitk.Image:
    # float intermediate so a non-integer slope doesn't fail on the in-place ops
    array = sitk.GetArrayFromImage(image).astype(np.float64)
    if not math.isclose(slope, 1):
        array *= slope
    if not math.isclose(intercept, 0):
        array += intercept
    out = sitk.GetImageFromArray(array.astype(np.int32))
    out.CopyInformation(image)
    return out


path = "/tmp/jo/ctrate/example_data/ctvit-transformer/2/BGC3109958/Thorax 1,50 Br40 S3_(512, 512).nii.gz"
metad_path = "/tmp/jo/ctrate/example_data/ctvit-transformer/2/BGC3109958/Thorax 1,50 Br40 S3_(512, 512)_metadata.json"

input_image = sitk.ReadImage(path)
with open(metad_path) as f:
    metad = json.load(f)
intercept = float(metad['RescaleIntercept'])
slope = float(metad['RescaleSlope'])
input_image = rescale_hu(input_image, intercept, slope)
```
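
As an extra safeguard, a quick plausibility check on the rescaled intensities can catch such problems before inference. The thresholds below are rough heuristics of mine for chest CT (air should sit near -1000 HU), not fixed rules:

```python
import numpy as np

def check_hu_range(array, lo=-1100, hi=3100):
    """Heuristic plausibility check for a chest CT in Hounsfield units:
    the minimum should sit near -1000 (air), the maximum well above
    soft tissue but below extreme values."""
    amin, amax = float(array.min()), float(array.max())
    ok = (lo <= amin <= -800) and (100 <= amax <= hi)
    return ok, (amin, amax)

# simulated volume with a proper HU-like range
arr = np.random.default_rng(0).integers(-1000, 1500, size=(4, 64, 64))
print(check_hu_range(arr))
```

If the check fails on real data, the slope/intercept mapping (or the padding value) is worth a second look before blaming the model.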

Now to the orientation: this is a rather unfortunate problem in this dataset. The orientation, that is, how the patient is oriented with respect to the image axes, is usually known in medical images. Here, when the data was converted from DICOM to NIfTI, that information was corrupted, so what is stored is wrong. However, in most (but certainly not all) CT scans the patient is lying on their back. For this dataset, your only option seems to be to work with that assumption by setting input_image.SetDirection([1, 0, 0, 0, 1, 0, 0, 0, 1]).

So finally, if you give correct input to lungmask you will get a correct output:

```python
from lungmask import LMInferer  # current lungmask API

inferer = LMInferer()
input_image = rescale_hu(input_image, intercept, slope)
input_image.SetDirection([1, 0, 0, 0, 1, 0, 0, 0, 1])
segmentation = inferer.apply(input_image)
```

*(screenshot: correct segmentation result)*

The orientation issue is a particularly dangerous one, because the model will still produce some output if you feed in images with the wrong orientation, but the quality will be degraded.

For example, if you don't set the orientation correctly:

```python
input_image = rescale_hu(input_image, intercept, slope)
segmentation = inferer.apply(input_image)  # direction left as stored, i.e. wrong
```

you will get an output where pathological areas are missed:

*(screenshot: segmentation with missed pathological areas)*

PS: There is another issue in this dataset that is not relevant for lungmask itself. Volumetric medical images like CT or MR carry the information of how much physical space a voxel occupies. In this dataset, just like the orientation, these values are wrong: input_image.GetSpacing() seems to be [1, 1, 1] everywhere. That's quite problematic, because any measurement of the volume of an organ or a disease, or of the size of anything, will come out wrong. So be careful with this dataset; the data hasn't been handled properly.
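
To illustrate the point with a toy example (my own, not from the dataset): a volume measurement is just foreground voxel count times physical voxel volume, so with the bogus [1, 1, 1] spacing every measurement is silently off by the product of the true voxel dimensions:

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume covered by a binary mask in millilitres:
    number of foreground voxels times the physical volume of one voxel."""
    voxel_mm3 = float(np.prod(spacing_mm))               # mm^3 per voxel
    return np.count_nonzero(mask) * voxel_mm3 / 1000.0   # 1 ml = 1000 mm^3

# toy 'lesion' of 10 x 10 x 10 = 1000 voxels
lesion = np.zeros((100, 100, 100), dtype=np.uint8)
lesion[40:50, 40:50, 40:50] = 1

print(mask_volume_ml(lesion, (1.0, 1.0, 1.0)))   # bogus spacing: 1.0 ml
print(mask_volume_ml(lesion, (0.7, 0.7, 1.5)))   # realistic spacing: ~0.735 ml
```

Same mask, a ~26% difference in reported volume, purely from the spacing metadata.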

VinyehShaw commented 2 months ago

Dear Johannes,

Thank you so much for providing the solution—you are my hero! I am now able to correctly segment the general lung area.

The example data I provided had already been processed, so the JSON files may have had a lot of critical information removed, which caused some misunderstanding.

The original CT-RATE dataset can be found here: . In this dataset, I noticed that they provide a metadata attributes file, train_metadata.csv, that includes relevant parameters such as the intercept, slope, and ImageOrientationPatient you mentioned. However, ImageOrientationPatient is a six-element list, e.g. [1, 0, 0, 0, 1, 0], not a nine-element list like the one you provided.

I tried combining it with ImagePositionPatient to create something like [187.6328125, -370.6328125, -1248.1, 1, 0, 0, 0, 1, 0], but this was less effective than using [1, 0, 0, 0, 1, 0, 0, 0, 1].

May I ask two more questions?

  1. Is the nine-element orientation you mentioned supposed to be a combination of ImagePositionPatient and ImageOrientationPatient as I tried? Is my approach correct?
  2. Are there any other metadata attributes that I should be using?

Thank you again for your help!

JoHof commented 2 months ago

Hi Vinyeh,

ok, very good that you have this table. With it you can reconstruct most of the relevant metadata. Understanding this requires a little linear algebra: the six-element list holds the two basis vectors that span an image slice, and their cross product is perpendicular to them, pointing along the z-axis (across the slices). That's how DICOM stores the image orientation. ITK stores the full 3x3 transformation matrix, so for ITK objects you can compute the matrix like this:

```python
import numpy as np

# example case train_10632_a_2; this case doesn't have the standard orientation
imageOrientation = np.asarray([1, 0, 0, 0, 0, -1], dtype=float)  # taken from the table you provided
direction_z = np.cross(imageOrientation[:3], imageOrientation[3:])
imageOrientation = np.hstack((imageOrientation, direction_z))
imageOrientation = imageOrientation.reshape(3, 3).T.flatten()

v.SetDirection(imageOrientation)
```

To set the correct spacing, you would use the XYSpacing and ZSpacing columns:

```python
v.SetSpacing((XYSpacing[0], XYSpacing[1], ZSpacing))  # SetSpacing expects a single sequence
```
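
For reuse across cases, the direction computation above can be wrapped in a small pure-numpy helper (same math, just packaged; the function name is my own):

```python
import numpy as np

def dicom_orientation_to_itk(iop):
    """Convert a 6-element DICOM ImageOrientationPatient (row and column
    direction cosines) into the 9-element row-major direction matrix
    expected by ITK/SimpleITK."""
    iop = np.asarray(iop, dtype=float)
    row, col = iop[:3], iop[3:]
    normal = np.cross(row, col)  # slice normal (z direction)
    # the three direction cosines form the *columns* of the ITK matrix
    return np.column_stack((row, col, normal)).flatten()

# standard supine orientation maps to the identity matrix
print(dicom_orientation_to_itk([1, 0, 0, 0, 1, 0]))
```

Then v.SetDirection(dicom_orientation_to_itk(iop_from_csv)) behaves exactly like the explicit snippet above.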