Thanks, @ashkanpakzad. If different volumes in the dataset end up having different shapes, it won't be possible to use a batch size > 1. Are you planning to train with patches?
Perhaps I've not thought this through for my current project, where I'm using whole 3D images. `CropOrPad` should already do the job, with `target_shape` necessary to achieve batch size > 1.
Though as part of a general preprocessing strategy it would be useful for patches, as you suggest.
My suspicions are confirmed, then :)
Assuming your spacings are all the same, you could use as `target_shape` the largest size across all masks in your dataset, and then use `mask_name` so the FOV is always centered on each mask.
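For concreteness, a minimal sketch of this recipe (the `'lung'` key, paths, and `image_mask_pairs` are illustrative assumptions, not from the thread):

```python
import torch
import torchio as tio

# Assumes every subject has a LabelMap called 'lung' and all images
# share the same spacing; image_mask_pairs is a hypothetical list of
# (CT path, mask path) tuples.
subjects = [
    tio.Subject(ct=tio.ScalarImage(ct_path), lung=tio.LabelMap(mask_path))
    for ct_path, mask_path in image_mask_pairs
]

def bbox_shape(subject):
    # Size of the tight bounding box of the mask, in voxels
    idx = (subject['lung'].data[0] > 0).nonzero()
    return idx.max(dim=0).values - idx.min(dim=0).values + 1

# Largest bounding box size across the dataset, per axis
target_shape = torch.stack([bbox_shape(s) for s in subjects]).max(dim=0).values

# Crop or pad every volume to that fixed shape, centered on each mask
transform = tio.CropOrPad(tuple(target_shape.tolist()), mask_name='lung')
dataset = tio.SubjectsDataset(subjects, transform=transform)
```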
Yes, very good point, thank you. I'll park it here for the future maybe :)
Thanks anyway for being happy to contribute!
Reopening after more interest has been shown on #677.
🚀 Feature
A transform that crops a given image to the extremes of the bounding box around a given mask.
Motivation
Such a transform is really useful in lung imaging, for example, where we want to cut away the rest of the chest CT (which can include the unwanted abdomen) and crop to a lung segmentation that is fairly easy to acquire.
Pitch
A transform `CropToMask` that takes `mask_name` and a `padding` variable, and crops an input `ScalarImage` to the bounding box about `mask_name`, with `padding` voxels added around the crop bounding box. If the padding extends beyond the limits of the `ScalarImage`, then pad the output as specified by `padding_mode`.
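As a minimal NumPy sketch of the proposed semantics (this illustrates the intended behaviour only; `crop_to_mask` and its signature are hypothetical, not an implementation for the library):

```python
import numpy as np

def crop_to_mask(image, mask, padding=0, pad_value=0):
    """Crop `image` to the bounding box of `mask`, with `padding` voxels of
    margin; margin that overruns the image bounds is filled with `pad_value`."""
    coords = np.argwhere(mask > 0)
    lo = coords.min(axis=0)           # lowest mask index per axis
    hi = coords.max(axis=0) + 1       # one past the highest mask index
    # Pad the whole image first so a margin that exceeds the image bounds
    # is filled with pad_value, then crop in the padded coordinate frame:
    # [lo - padding, hi + padding) maps to [lo, hi + 2 * padding).
    padded = np.pad(image, padding, constant_values=pad_value)
    slices = tuple(slice(l, h + 2 * padding) for l, h in zip(lo, hi))
    return padded[slices]
```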
Alternatives
As suggested by @fepegar, this could also be an implementation of `CropOrPad` that takes `target_shape=None` and `mask_name`.
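If that alternative were adopted, usage might look like the following (hypothetical until the feature exists; `'lung'` is an assumed mask key):

```python
import torchio as tio

# Proposed behaviour: with target_shape=None, crop to the mask bounding box
transform = tio.CropOrPad(target_shape=None, mask_name='lung')
```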
Additional context
Currently, I implement this by preprocessing my data separately. Below is an example for a lung mask in a chest CT.
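The original snippet isn't reproduced here; a minimal sketch of this kind of standalone preprocessing, assuming NIfTI inputs and reusing the `crop_to_mask` helper sketched above (file names are placeholders):

```python
import nibabel as nib
import numpy as np

ct_img = nib.load('ct.nii.gz')
lung = nib.load('lung_mask.nii.gz').get_fdata() > 0

# Crop the CT to the lung bounding box with a 10-voxel margin; out-of-FOV
# voxels are filled with -1024 HU (air). Note: the affine translation is
# left unchanged here, so world coordinates shift; adjust it if that matters.
cropped = crop_to_mask(ct_img.get_fdata(), lung, padding=10, pad_value=-1024)
nib.save(nib.Nifti1Image(cropped.astype(np.float32), ct_img.affine),
         'ct_cropped.nii.gz')
```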
The CT images are provided from QIN LUNG CT (https://wiki.cancerimagingarchive.net/display/Public/QIN+LUNG+CT#b8d88cce4fd14620bef4e5e35ec3d589) under the Creative Commons Attribution 3.0 Unported License (https://creativecommons.org/licenses/by/3.0/). The citations are: