JuliaWolleb / Diffusion-based-Segmentation

This is the official PyTorch implementation of the paper "Diffusion Models for Implicit Image Segmentation Ensembles".
MIT License

original data 3d how to apply and best model checkpoint to reproduce the results #28

Closed saisusmitha closed 1 year ago

saisusmitha commented 1 year ago

Your data folder contains only a small set of 2D data, but the original data (https://www.med.upenn.edu/cbica/brats2020/data.html) is 3D (many folders). Did you include instructions for running on the complete original 3D data? I couldn't find them in the code. Also, could you kindly share the trained model checkpoint that produced the published results?

saisusmitha commented 1 year ago

@JuliaWolleb kindly answer this

JuliaWolleb commented 1 year ago

Please read our paper: "We slice the 3D MR scans in axial slices. Since tumors rarely occur on the upper or lower part of the brain, we exclude the lowest 80 slices and the uppermost 26 slices. For intensity normalization, we cut the top and bottom one percentile of the pixel intensities. We crop the images to a size of (4, 224, 224). The provided ground truth labels contain four classes, which are background, GD-enhancing tumor, the peritumoral edema, and the necrotic and non-enhancing tumor core. We merge the three different tumor classes into one class and therefore define the segmentation problem as a pixel-wise binary classification. No data augmentation is applied."

Store these 2D slices as suggested in the mini-example in the folder /data.
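The preprocessing described above can be sketched roughly as follows. This is not the authors' code, just a minimal NumPy illustration of the stated steps (percentile clipping, slice exclusion, center-cropping to 224×224, and merging the tumor labels into a binary mask); the function name, the center-crop choice, and the per-volume (rather than per-modality) normalization are assumptions, and loading the actual NIfTI files (e.g. via nibabel) is omitted.

```python
import numpy as np

def preprocess_volume(vol, seg):
    """Turn one 3D BraTS case into a list of 2D training slices.

    vol: (4, H, W, D) array with the four MR modalities stacked.
    seg: (H, W, D) ground-truth labels (0 = background; 1, 2, 4 = tumor classes).
    Returns a list of (image, label) pairs with shapes (4, 224, 224) and (224, 224).
    Hypothetical sketch; the repository's own data loader is authoritative.
    """
    # Intensity normalization: cut the top and bottom one percentile,
    # then rescale to [0, 1] (normalization per volume is an assumption here).
    lo, hi = np.percentile(vol, [1, 99])
    vol = np.clip(vol, lo, hi)
    vol = (vol - lo) / (hi - lo + 1e-8)

    # Merge the three tumor classes into one binary foreground class.
    seg = (seg > 0).astype(np.float32)

    slices = []
    depth = vol.shape[-1]
    # Exclude the lowest 80 and the uppermost 26 axial slices.
    for z in range(80, depth - 26):
        img = vol[..., z]            # (4, H, W)
        lab = seg[..., z]            # (H, W)
        # Crop to (4, 224, 224); a center crop is assumed here.
        h, w = img.shape[-2:]
        top, left = (h - 224) // 2, (w - 224) // 2
        img = img[:, top:top + 224, left:left + 224]
        lab = lab[top:top + 224, left:left + 224]
        slices.append((img, lab))
    return slices
```

For a standard 240×240×155 BraTS volume this yields slices for z in [80, 129), each of which would then be stored individually under /data as in the mini-example.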