kbressem / prostate158

MIT License

Segmentation output looking bad #11

Closed · LearningForeverMS closed 4 months ago

LearningForeverMS commented 4 months ago

Hi

I am trying to use this model (anatomy.pt) and running inference on one of the NIfTI files, t2.nii.gz (in the train/021 folder).

Three class labels are predicted (0, 1, 2), but they are very scattered and do not form any pattern. For example, for this test file t2.nii.gz, I am getting the result below:

[image]

Here, the left is the input image and the right is the segmentation output.

Please help me figure out what I am doing wrong.

Regards Meenu

kbressem commented 4 months ago

Are you sure you are loading the model?
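
A quick sanity check, as a minimal sketch (assuming the checkpoint is a plain state dict):

```python
# Sketch: with strict=True, load_state_dict raises if the checkpoint keys do
# not match the architecture, so a silently un-initialized model can't slip by.
state = torch.load("models/anatomy.pt", map_location="cpu")
print(model.load_state_dict(state, strict=True))
# -> <All keys matched successfully>
```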

LearningForeverMS commented 4 months ago

Thanks for the response. Yes. Below is the code used:

```python
import torch
import numpy as np
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference
from monai.transforms import (
    Compose, LoadImaged, AddChanneld, ScaleIntensityd, ToTensord,
)
from monai.data import Dataset, DataLoader


def load_config():
    # Ideally, load this configuration from the actual YAML file
    return {
        'ndim': 3,
        'data': {'image_cols': ['t2']},
        'model': {
            'out_channels': 3,
            'channels': [16, 32, 64, 128, 256, 512],
            'strides': [2, 2, 2, 2, 2],
            'num_res_units': 4,
            'act': 'PRELU',
            'norm': 'BATCH',
            'dropout': 0.15
        }
    }
```

```python
def main():
    config = load_config()

    # Define the path to the trained model checkpoint and the NIfTI file for inference
    model_path = 'models/anatomy.pt'
    image_path = 'prostate158/train/021/t2.nii.gz'

    # Set up the device for inference
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Define transformations for the input data
    val_transforms = Compose([
        LoadImaged(keys=["image"]),
        AddChanneld(keys=["image"]),
        ScaleIntensityd(keys=["image"]),
        ToTensord(keys=["image"])
    ])

    # Create a dataset with a single sample
    data = [{"image": image_path}]
    val_ds = Dataset(data, transform=val_transforms)
    val_loader = DataLoader(val_ds, batch_size=1)

    # Build the model from the configuration parameters and load the weights
    model = UNet(
        spatial_dims=config['ndim'],
        in_channels=len(config['data']['image_cols']),
        out_channels=config['model']['out_channels'],
        channels=config['model']['channels'],
        strides=config['model']['strides'],
        num_res_units=config['model']['num_res_units'],
        act=config['model']['act'],
        norm=config['model']['norm'],
        dropout=config['model']['dropout'],
    ).to(device)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model.eval()

    # Perform inference and visualize results
    with torch.no_grad():
        for batch_data in val_loader:
            inputs = batch_data["image"].to(device)
            outputs = sliding_window_inference(inputs, (96, 96, 96), 4, model)
            outputs = torch.argmax(outputs, dim=1).detach().cpu().numpy()[0]
```

....... This anatomy model was created by us following the exact steps in the training documentation.

Hoping I am not making any silly mistake :)

kbressem commented 4 months ago

Can you verify the pipeline with the openly available weights?

https://zenodo.org/records/7040585
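
A minimal sketch for swapping in the public checkpoint (the filename here is an assumption; check the Zenodo record for the exact name):

```python
# Sketch: keep the pipeline identical and only swap the checkpoint, so any
# remaining error must come from preprocessing or postprocessing.
model_path = "anatomy.pt"  # file downloaded from the Zenodo record above
model.load_state_dict(torch.load(model_path, map_location=device))
model.eval()
```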

LearningForeverMS commented 4 months ago

Yes, we tried with the available weights, but with those only one class (background) remains after the argmax:

```python
inputs = batch_data["image"].to(device)
outputs = sliding_window_inference(inputs, (96, 96, 96), 4, model)
num_classes = outputs.shape[1]
print("Number of classes predicted by the model:", num_classes)
outputs_final = torch.argmax(outputs, dim=1)
outputs_final = outputs_final.detach().cpu().numpy()[0]

unique_labels = np.unique(outputs_final)
print("Unique predicted class labels:", unique_labels)
```

[image]

kbressem commented 4 months ago

The model works well in the MONAI model zoo, so the issue is not with the weights. Are you applying the argmax correctly? What is the shape of the output?

Also, why the `[0]` here: `outputs_final = outputs_final.detach().cpu().numpy()[0]`?
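
For reference, a short sketch of the shapes to expect (assuming a batch size of 1 and 3 output channels):

```python
# Sketch: sliding_window_inference returns logits of shape (B, C, H, W, D).
outputs = sliding_window_inference(inputs, (96, 96, 96), 4, model)
print(outputs.shape)                 # e.g. torch.Size([1, 3, 270, 270, 24])

# argmax over dim=1 collapses the channel axis -> (B, H, W, D);
# the [0] then drops the batch axis, leaving an (H, W, D) label volume.
seg = torch.argmax(outputs, dim=1)             # (1, 270, 270, 24)
label_volume = seg.detach().cpu().numpy()[0]   # (270, 270, 24)
```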

LearningForeverMS commented 4 months ago

Even without the `[0]`, the result is the same.

`outputs_final` shape: `(1, 270, 270, 24)`

kbressem commented 4 months ago

Could you check that your preprocessing is the same as here:

https://github.com/Project-MONAI/model-zoo/blob/dev/models/prostate_mri_anatomy/configs/inference.json
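
For reference, a sketch of such a preprocessing chain; the `pixdim`, orientation, and normalization values below are illustrative assumptions, so the exact parameters must be taken from the linked `inference.json`:

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd,
    NormalizeIntensityd, EnsureTyped,
)

# Sketch of a resampling + normalization chain in the spirit of the linked
# config; a mismatch in spacing or intensity scaling between training and
# inference typically produces exactly this kind of scattered output.
val_transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Orientationd(keys="image", axcodes="RAS"),      # assumed orientation
    Spacingd(keys="image", pixdim=(0.5, 0.5, 0.5),  # assumed spacing
             mode="bilinear"),
    NormalizeIntensityd(keys="image"),              # assumed normalization
    EnsureTyped(keys="image"),
])
```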

LearningForeverMS commented 4 months ago

Thank you so much for the pipeline. Yes, it is working now. I am able to create an overlay like the one below (axis 2):

[image]

I have now exported overlay slices along 3 axes and created 3 stacks. Now I want to combine these and export them as a single NIfTI file. Do you have some existing code that can do this?

Thanks much in advance !

kbressem commented 4 months ago

Good to hear. It is straightforward to do this. Stack them into a 3D array, then use SimpleITK. Something like this:

```python
arr = np.squeeze(outputs_final)    # drop the singleton batch axis
im = sitk.GetImageFromArray(arr)   # numpy (z, y, x) -> SimpleITK image
sitk.WriteImage(im, filename)
```
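
One caveat worth adding to the sketch above: `GetImageFromArray` produces an image with default spacing and origin, so the saved mask may not line up with the scan in a viewer. Copying the geometry from the source image fixes that, assuming the array has the same voxel grid size as the reference:

```python
import SimpleITK as sitk

# Sketch: copy origin, spacing, and direction from the original scan so the
# segmentation overlays correctly; requires identical image dimensions.
ref = sitk.ReadImage(image_path)  # image_path from the inference script
im.CopyInformation(ref)
sitk.WriteImage(im, "segmentation.nii.gz")
```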

Good luck