Heyho @innat,
first of all: I would do a data exploration of the depth ranges after applying resampling, to get an updated range/histogram of depths with normalized voxel spacing (since resampling will also dramatically change the shape).
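To make that exploration step concrete, here is a minimal sketch of how the depth distribution could be inspected after normalizing voxel spacing. The scan shapes, spacings, and the 1.0 mm target spacing are hypothetical examples, not values from this thread:

```python
# Sketch: after resampling each volume to a common voxel spacing,
# collect the resulting depth sizes to inspect their distribution.
import numpy as np

def resampled_depth(shape, spacing, target_spacing=1.0):
    """New depth (size along the last axis) after resampling the
    z-spacing (mm) to target_spacing (mm)."""
    return int(round(shape[2] * spacing[2] / target_spacing))

# hypothetical dataset: (shape, voxel spacing in mm) per scan
scans = [
    ((512, 512, 100), (0.7, 0.7, 3.0)),   # thick slices -> depth grows
    ((512, 512, 250), (0.8, 0.8, 1.0)),   # already ~isotropic in z
    ((512, 512, 45),  (0.9, 0.9, 5.0)),
]

depths = [resampled_depth(shape, spacing) for shape, spacing in scans]
print(depths)                              # depths after normalization
print(np.histogram(depths, bins=3))        # coarse depth histogram
```

With the updated depth histogram in hand, you can pick a fixed target depth (e.g. a percentile of the distribution) for cropping or padding.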
Then, you have various options:
There are probably more options, but these three are the most popular ones I have read in the literature and that came to my mind right now.
But be aware that these operations will always remove information from your image, even if you think that information should not be important. Take COVID-19 severity, to stick with the lung/thorax CT example: there is a correlation between obesity and pneumonia severity, but if we apply a lung mask, we analyze purely the lungs and throw away the remaining CT information that could indicate a high body-fat ratio.
Hope that this helps with your issue.
Cheers, Dominik
@muellerdo Hi, could you please transfer this ticket to the discussion tab? I think it fits better there.
Let's say we have 3D data with shape `(height, width, depth)`, where `depth` is 100. After doing some EDA, we've found that out of the 100 slices, most are black, especially at the beginning and at the end; the middle range (roughly slices 30–70) contains the most informative slices.

Now, this was only one 3D sample. A whole batch of 3D data may have different numbers of `depth` slices. In most cases, the beginning and ending slices are mostly black, while some middle slices carry the most salient features. My question is: what are the best strategies for adaptively picking the relevant slices in this situation?
My current approach is to calculate an approximate middle range. For example:
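One way to make this adaptive, rather than hard-coding a middle range, is to score each depth slice by its mean intensity and keep the contiguous range of non-black slices. This is a minimal sketch; the relative threshold of 0.1 and the depth-last axis order are assumptions:

```python
# Sketch: adaptively select the informative slice range along depth
# by thresholding per-slice mean intensity against the brightest slice.
import numpy as np

def informative_slice_range(volume, rel_threshold=0.1):
    """Return (start, stop) indices along the depth axis (last axis)
    covering the slices that are not near-black."""
    slice_means = volume.mean(axis=(0, 1))      # one mean per depth slice
    cutoff = rel_threshold * slice_means.max()
    idx = np.where(slice_means > cutoff)[0]
    if idx.size == 0:                           # fully black volume: keep all
        return 0, volume.shape[2]
    return int(idx[0]), int(idx[-1]) + 1

# toy volume: black everywhere except slices 30..69
vol = np.zeros((64, 64, 100), dtype=np.float32)
vol[:, :, 30:70] = 1.0
print(informative_slice_range(vol))   # (30, 70)
```

For CT data in Hounsfield units, you would likely want to threshold on a windowed or shifted intensity instead of the raw mean, since air is strongly negative rather than zero.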