TransMorph: Transformer for Unsupervised Medical Image Registration (PyTorch)

CTDataset error #50

Closed. morteza89 closed this issue 1 year ago.

morteza89 commented 1 year ago

In CTDataset, y is loaded from a hard-coded path: 'D:/DATA/Duke/XCAT/phan.pkl'.

For training the TransMorph affine model, what should this be changed to? In other words, what is the appropriate y for each input x?

https://github.com/junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration/blob/639e1fc134e1cbc61c78a1e054cfcb6f4bafee38/TransMorph_affine/data/datasets.py#L24

ljjiayou commented 1 year ago

@morteza89 Has this problem been solved? I also ran into it when running the code.

kvttt commented 1 year ago

I have encountered the same problem.

morteza89 commented 1 year ago

Here is how I fixed it:

class CTDataset(Dataset):
    def __init__(self, data_path, atlas_path, transforms):
        self.paths = data_path
        self.atlas = atlas_path
        self.transforms = transforms

    def one_hot(self, img, C):
        out = np.zeros((C, img.shape[1], img.shape[2], img.shape[3]))
        for i in range(C):
            out[i, ...] = img == i
        return out

    def __getitem__(self, index):
        path = self.paths[index]
        x, _ = pkload(path)
        y = pkload(self.atlas)
        y = np.flip(y, 1)
        x, y = x[None, ...], y[None, ...]
        x, y = self.transforms([x, y])
        x = np.ascontiguousarray(x)
        x = torch.from_numpy(x)
        y = np.ascontiguousarray(y)
        y = torch.from_numpy(y)
        return x, y

    def __len__(self):
        return len(self.paths)
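
With this change the atlas path is passed in instead of being hard-coded. A minimal usage sketch (the folder layout, file names, and the identity transform below are assumptions for illustration, not paths from the repo):

import glob
import torch
from torch.utils.data import DataLoader

# Hypothetical locations; point these at your own training pickles and atlas pickle.
train_paths = glob.glob('D:/DATA/CT_train/*.pkl')   # each training .pkl holds (image, segmentation)
atlas_path = 'D:/DATA/CT_atlas/atlas_img.pkl'        # single atlas image

# The actual training script passes its composed transform pipeline here;
# an identity function keeps this sketch self-contained.
identity = lambda pair: pair

train_set = CTDataset(train_paths, atlas_path, transforms=identity)
train_loader = DataLoader(train_set, batch_size=1, shuffle=True, num_workers=0, pin_memory=True)

for x, y in train_loader:
    # x: a training image, y: the atlas image it is paired with
    pass

In the real setup you would replace the identity function with whatever composed transform pipeline your training script builds.
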
ljjiayou commented 1 year ago

@morteza89 Thanks, let me try running it like this.

Rudeguy1 commented 1 year ago

@morteza89 Hello, I think your understanding is very good. May I ask a few questions? In the source code, there is:

        x, x_seg = pkload(self.paths[index])
        y = pkload('D:/DATA/Duke/XCAT/phan.pkl')
        y_seg = pkload('D:/DATA/Duke/XCAT/label.pkl')
        y = np.flip(y, 1)
        y_seg = np.flip(y_seg, 1)
        # transforms work with nhwtc
        x, y = x[None, ...], y[None, ...]
        x_seg, y_seg = x_seg[None, ...], y_seg[None, ...]
        x,y = self.transforms([x, y])
        x_seg, y_seg = self.transforms([x_seg, y_seg])
        x = np.ascontiguousarray(x)
        x = torch.from_numpy(x)
        y = np.ascontiguousarray(y)
        y = torch.from_numpy(y)

        x_seg = np.ascontiguousarray(x_seg).astype(np.uint8)
        x_seg = torch.from_numpy(x_seg)
        y_seg = np.ascontiguousarray(y_seg).astype(np.uint8)
        y_seg = torch.from_numpy(y_seg)

Do x and x_seg refer to the original image and the organ segmentation of a fixed image? And do y and y_seg refer to the moving image and its organ segmentation? In the part you modified, does y refer to the moving image?

morteza89 commented 1 year ago


Correct. x and x_seg are the images coming from your training folder during training, and y and y_seg should point to the atlas data (the image and the organ segmentation, respectively). The part I modified in the code refers to the atlas image as well.
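
If the affine training also needs the atlas labels, the same idea extends to the second hard-coded path, so that both y and y_seg are configurable. A sketch along those lines (the class and argument names are illustrative, not from the repo):

import numpy as np
import torch
from torch.utils.data import Dataset
# pkload: the repo's pickle loader (see the imports at the top of data/datasets.py)

class CTAtlasDataset(Dataset):
    # Like CTDataset above, but both the atlas image and its label map are
    # passed to the constructor instead of being hard-coded.
    def __init__(self, data_paths, atlas_img_path, atlas_seg_path, transforms):
        self.paths = data_paths
        self.atlas_img = atlas_img_path
        self.atlas_seg = atlas_seg_path
        self.transforms = transforms

    def __getitem__(self, index):
        x, x_seg = pkload(self.paths[index])        # training image and its segmentation
        y = np.flip(pkload(self.atlas_img), 1)      # atlas image
        y_seg = np.flip(pkload(self.atlas_seg), 1)  # atlas segmentation
        x, y = x[None, ...], y[None, ...]
        x_seg, y_seg = x_seg[None, ...], y_seg[None, ...]
        x, y = self.transforms([x, y])
        x_seg, y_seg = self.transforms([x_seg, y_seg])
        x = torch.from_numpy(np.ascontiguousarray(x))
        y = torch.from_numpy(np.ascontiguousarray(y))
        x_seg = torch.from_numpy(np.ascontiguousarray(x_seg).astype(np.uint8))
        y_seg = torch.from_numpy(np.ascontiguousarray(y_seg).astype(np.uint8))
        return x, x_seg, y, y_seg

    def __len__(self):
        return len(self.paths)
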

hanquansanren commented 1 year ago

A good approach, thanks for the discussion, it helps me a lot!