Hey there,

Great work! I have a few questions about the implementation details:
Input format: Could you specify the format of the volume used as input for training?
Training with slices: According to a reply in another issue, masks are used during inference only. Can individual slices be used as input during training? If so, how should these slices be prepared, and how should I modify the dataset.py file to do so?
Inference requirements: For inference, could you elaborate on how the mask of a slice in the volume is utilized? Are there specific considerations or settings needed?
Access to pre-trained weights: Are there pre-trained weights available for this model?
Thank you for your help and for sharing this work!

Best,
Chiara
I am sorry that, due to the data agreement, we cannot make the pre-trained weights public at the moment. We would like to do so in the future and are looking into alternative ways to share them.
The volumes we tested on are 160x160x160, but other dimensions should also work (ideally divisible by 2^4 = 16 because of the downsampling in the network). Different file formats are fine, though you may need to edit the dataloader a bit.
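If your volume size is not a multiple of 16, one simple option is to zero-pad it before feeding it in. A minimal sketch (the helper name `pad_to_multiple` is my placeholder, not part of the repo):

```python
import numpy as np

def pad_to_multiple(volume, multiple=16):
    """Zero-pad a 3D volume so every axis is divisible by `multiple`.

    The network downsamples 2^4 = 16x, so sizes that are multiples of 16
    avoid shape mismatches on the upsampling path.
    """
    pads = []
    for size in volume.shape:
        extra = (-size) % multiple          # how much is missing to the next multiple
        pads.append((extra // 2, extra - extra // 2))  # split front/back
    return np.pad(volume, pads, mode="constant")

vol = np.zeros((150, 160, 155), dtype=np.float32)
print(pad_to_multiple(vol).shape)  # (160, 160, 160)
```

You can crop the padded border back off after inference if you need the original shape.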
The masks are only used for evaluation (see below), not during training. At a high level, as long as you can sample 2D slices from a volume and know each slice's position in the 3D volume, you can use them for training.
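A minimal sketch of such a slice sampler (purely illustrative; the class and argument names are placeholders, not taken from dataset.py):

```python
import numpy as np

class SliceSampler:
    """Sample 2D slices from a 3D volume together with their position."""

    def __init__(self, volume, axis=0):
        self.volume = volume
        self.axis = axis

    def __len__(self):
        return self.volume.shape[self.axis]

    def __getitem__(self, idx):
        # Extract the idx-th slice along the chosen axis.
        slc = np.take(self.volume, idx, axis=self.axis)
        # Normalized position in [0, 1] tells the model where the slice
        # sits along that axis of the 3D volume.
        pos = idx / max(len(self) - 1, 1)
        return slc.astype(np.float32), np.float32(pos)

vol = np.random.rand(160, 160, 160)
slice_, pos = SliceSampler(vol)[80]
print(slice_.shape)  # (160, 160)
```

Wrapping this in your framework's dataset class (e.g. a PyTorch `Dataset`) is then mostly a matter of returning `(slice, position)` pairs from `__getitem__`.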
Masks are not needed during inference. In the paper they are used at test time so that the quantitative results focus on the brain region, but for real applications no mask is required.
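For the quantification itself, restricting a metric to the masked region can look like this (illustrative sketch; `masked_mse` is not from our code):

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Mean squared error over voxels where mask is True only.

    This mirrors the idea of evaluating within the brain region at test
    time; the mask never enters the model itself.
    """
    m = mask.astype(bool)
    diff = pred[m] - target[m]
    return float(np.mean(diff ** 2))

pred = np.ones((4, 4))
target = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True  # pretend the top half is the "brain" region
print(masked_mse(pred, target, mask))  # 1.0
```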