No padding or cropping; the input is at its original resolution.
Thanks for the clarification! So, for handling non-standard image sizes, you just do the Pad and Unpad steps, like here and here?
```python
def forward(self, x: torch.Tensor) -> torch.Tensor:
    if not x.shape[-1] == 2:
        raise ValueError("Last dimension must be 2 for complex.")

    # get shapes for unet and normalize
    x = self.complex_to_chan_dim(x)
    x, mean, std = self.norm(x)
    x, pad_sizes = self.pad(x)

    x = self.unet(x)

    # get shapes back and unnormalize
    x = self.unpad(x, *pad_sizes)
    x = self.unnorm(x, mean, std)
    x = self.chan_complex_to_last_dim(x)

    return x
```
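For anyone else wondering how arbitrary input sizes get through the UNet: below is a minimal sketch of what the `pad`/`unpad` helpers referenced above typically look like (modeled on fastMRI's `NormUnet`; the round-up-to-a-multiple-of-16 rule assumes a four-level UNet and is an assumption on my part, not something confirmed from this repo).

```python
import math
from typing import List, Tuple

import torch
import torch.nn.functional as F


def pad(x: torch.Tensor) -> Tuple[torch.Tensor, Tuple[List[int], List[int], int, int]]:
    # Round each spatial dim up to the next multiple of 16 so that four
    # 2x down/up-sampling stages of the UNet line up exactly, then pad
    # (roughly) symmetrically on each side.
    _, _, h, w = x.shape
    w_mult = ((w - 1) | 15) + 1
    h_mult = ((h - 1) | 15) + 1
    w_pad = [math.floor((w_mult - w) / 2), math.ceil((w_mult - w) / 2)]
    h_pad = [math.floor((h_mult - h) / 2), math.ceil((h_mult - h) / 2)]
    x = F.pad(x, w_pad + h_pad)
    return x, (h_pad, w_pad, h_mult, w_mult)


def unpad(
    x: torch.Tensor, h_pad: List[int], w_pad: List[int], h_mult: int, w_mult: int
) -> torch.Tensor:
    # Crop back to the original resolution after the UNet pass.
    return x[..., h_pad[0] : h_mult - h_pad[1], w_pad[0] : w_mult - w_pad[1]]
```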
Hello,
What spatial dimensions are you using for training on the CMRxRecon dataset? I didn't find any cropping or padding in the data loaders for it.
Is it handled purely through asymmetric padding in the UNet layers for the full-resolution scans? [144x512 or 116x382 for the T1/T2 task]
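For concreteness, assuming the pad step rounds each spatial dimension up to the next multiple of 16 (as in the sketch above; this is an assumption, not verified against the training config), the quoted shapes would map like this:

```python
def padded_shape(h: int, w: int, mult: int = 16) -> tuple:
    # Hypothetical helper: round each dim up to the next multiple of `mult`
    # (mult must be a power of two for this bit trick to work).
    round_up = lambda d: ((d - 1) | (mult - 1)) + 1
    return round_up(h), round_up(w)

print(padded_shape(144, 512))  # (144, 512) -> already multiples of 16, no padding needed
print(padded_shape(116, 382))  # (128, 384) -> padded before the UNet, cropped back after
```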