Project-MONAI / research-contributions

Implementations of recent research prototypes/demonstrations using MONAI.
https://monai.io/
Apache License 2.0

How to get segmentation results with test.py for img0060 ~ img0080 on BTCV #214

Open whduddhks opened 1 year ago

whduddhks commented 1 year ago

In utils/data_utils.py, the test_transform requires a label key:

https://github.com/Project-MONAI/research-contributions/blob/4af80e1e2dcacfde8255fcf32616184990d5bf40/SwinUNETR/BTCV/utils/data_utils.py#L119

But there is no label for the test dataset.

And in test.py

https://github.com/Project-MONAI/research-contributions/blob/4af80e1e2dcacfde8255fcf32616184990d5bf40/SwinUNETR/BTCV/test.py#L91

it likewise expects a label, the same as in data_utils.py.

So how can I get segmentation results for the test dataset?

Thank you, whduddhks

Mentholatum commented 1 year ago

Same question; have you solved it yet?

whduddhks commented 1 year ago

I think I solved it.

Here is my modified test_transform in utils/data_utils.py:

test_transform = transforms.Compose(
    [
        transforms.LoadImaged(keys=["image"]),
        transforms.AddChanneld(keys=["image"]),
        # transforms.Orientationd(keys=["image"], axcodes="RAS"),
        transforms.Spacingd(keys="image", pixdim=(args.space_x, args.space_y, args.space_z), mode="bilinear"),
        transforms.ScaleIntensityRanged(
            keys=["image"], a_min=args.a_min, a_max=args.a_max, b_min=args.b_min, b_max=args.b_max, clip=True
        ),
        transforms.ToTensord(keys=["image"]),
    ]
)

And on line 133, load the test split:

test_files = load_decathlon_datalist(datalist_json, True, "test", base_dir=data_dir)
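For context, this assumes the datalist JSON has a "test" section that lists image paths only, with no "label" entries (the Medical Segmentation Decathlon format). A minimal stand-alone sketch of what the test split looks like and how a list of raw image paths could be derived from it (the paths and the helper below are illustrative, not the repo's exact code):

```python
import os

# Hypothetical minimal datalist in the Decathlon format:
# the "test" section carries images only, no "label" keys.
datalist = {
    "test": [
        {"image": "imagesTs/img0061.nii.gz"},
        {"image": "imagesTs/img0062.nii.gz"},
    ]
}

def load_test_files(datalist, base_dir):
    """Mimic load_decathlon_datalist(..., "test", base_dir=...) for this
    sketch: return a list of dicts with resolved image paths."""
    return [{"image": os.path.join(base_dir, d["image"])} for d in datalist["test"]]

test_files = load_test_files(datalist, base_dir="/data/BTCV")

# A flat list of NIfTI paths (used later as 'raw' in the test script):
raw = [d["image"] for d in test_files]
print(raw[0])
```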

For Test.py

# 'raw' is the list of test image file paths (built from test_files above)
shape, affine = [], []
for img_path in raw:
    nii_img = nib.load(img_path)

    shape.append(nii_img.get_fdata().shape)
    affine.append(nii_img.affine)
with torch.no_grad():
    for i, batch in enumerate(val_loader):
        val_inputs = batch["image"].cuda()
        # original_affine = batch["label_meta_dict"]["affine"][0].numpy()

        h, w, d = shape[i]
        target_shape = (h, w, d)
        img_name = batch["image_meta_dict"]["filename_or_obj"][0].split("/")[-1]
        print("Inference on case {}".format(img_name))

        val_outputs = sliding_window_inference(
            val_inputs, (args.roi_x, args.roi_y, args.roi_z), 4, model, overlap=args.infer_overlap, mode="gaussian"
        )
        val_outputs = torch.softmax(val_outputs, 1).cpu().numpy()
        val_outputs = np.argmax(val_outputs, axis=1).astype(np.uint8)[0]
        val_outputs = resample_3d(val_outputs, target_shape)

        nib.save(
            nib.Nifti1Image(val_outputs.astype(np.uint8), affine[i]), os.path.join(output_directory, img_name)
        )
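The post-processing in the loop above (softmax over the class channel, argmax to a uint8 label map, resampling back to the original volume shape) can be sketched with plain NumPy. Note the `resample_3d` below is my own nearest-neighbour stand-in for illustration, not necessarily the repo's implementation:

```python
import numpy as np

def resample_3d(volume, target_shape):
    """Nearest-neighbour resample of a 3-D label volume to target_shape.
    A stand-in for the repo's utility, for illustration only."""
    src = np.asarray(volume)
    # For each axis, map target indices back to the nearest source index.
    idx = [
        np.minimum((np.arange(t) * s / t).astype(int), s - 1)
        for s, t in zip(src.shape, target_shape)
    ]
    return src[np.ix_(*idx)]

# Fake network output: batch=1, 3 classes, 4x4x4 volume of logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 3, 4, 4, 4))

# Softmax over the class channel, then argmax -> uint8 label map.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = np.argmax(probs, axis=1).astype(np.uint8)[0]  # shape (4, 4, 4)

# Resample the label map back to the original image shape, e.g. 8x8x8.
restored = resample_3d(labels, (8, 8, 8))
print(restored.shape)  # (8, 8, 8)
```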

This gave me the segmentation results for cases 61 ~ 80.

Thank you

Dasha484 commented 7 months ago

Could I take a look at the complete 'Test.py' file? 'raw' is not defined anywhere, so this results in an error.