Closed amh28 closed 6 years ago
Hi @amh28 !
If you download the NiftyNet source code (assuming folder `NiftyNet/`), also download and unzip the image data into these folders:

```
NiftyNet/data/PROMISE2/TrainingData_Part1/
NiftyNet/data/PROMISE2/TrainingData_Part2/
NiftyNet/data/PROMISE2/TrainingData_Part3/
```
Running

```sh
cd NiftyNet/                         # go to the source code folder
pip install tensorflow==1.3          # install TensorFlow
pip install -r requirements-gpu.txt  # install NiftyNet dependencies
pip install SimpleITK                # install SimpleITK (required for reading the files)
python net_segment.py train -c demos/PROMISE12/promise12_demo_train_config.ini
```

will train dense_vnet from scratch.
The configuration file specifies the dense_vnet implementation and the Dice loss.
Please find more info on the configuration options here:
https://github.com/NifTK/NiftyNet/blob/dev/config/README.md
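For reference, the two options that select the network and the loss in that demo config look like this (a sketch of just the relevant lines; see the config README above for the full set of options):

```ini
; Sketch of the relevant options in promise12_demo_train_config.ini
[NETWORK]
name = dense_vnet

[TRAINING]
loss_type = Dice
```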
I'm closing this issue now but please feel free to reopen or ask questions on StackOverflow niftynet tag https://stackoverflow.com/questions/tagged/niftynet.
Hello, I did as you said, and since this dataset has 13 different segmentation classes, I made some modifications to the configuration file in order to perform multi-class segmentation:
```ini
[promise12]
csv_file = demos/TrainingAb/file_list.csv
spatial_window_size = (104, 104, 80)
interp_order = 3
pixdim = (1.0, 1.0, 1.0)
axcodes = (A, R, S)

[label]
csv_file = demos/TrainingAb/file_list_seg.csv
spatial_window_size = (104, 104, 80)
interp_order = 0
pixdim = (1.0, 1.0, 1.0)
axcodes = (A, R, S)

############################## system configuration sections
[SYSTEM]
cuda_devices = ""
num_threads = 4
num_gpus = 1
model_dir = ./trainingab_model

[NETWORK]
name = dense_vnet
activation_function = prelu
batch_size = 1
volume_padding_size = 0
histogram_ref_file = histogram.txt
norm_type = percentile
cutoff = (0.01, 0.99)
normalisation = True
whitening = True
normalise_foreground_only = True
foreground_type = otsu_plus
multimod_foreground_type = and
window_sampling = resize
queue_length = 8

[TRAINING]
sample_per_volume = 4
rotation_angle = (-10.0, 10.0)
scaling_percentage = (-10.0, 10.0)
random_flipping_axes = 1
lr = 0.00002
loss_type = Dice
starting_iter = 0
save_every_n = 500
max_iter = 25000
max_checkpoints = 50

[INFERENCE]
border = (0, 0, 0)
save_seg_dir = output_dense_vnet/
output_interp_order = 0
spatial_window_size = (104, 104, 80)

############################ custom configuration sections
[SEGMENTATION]
image = promise12
label = label
output_prob = True
num_classes = 14
label_normalisation = True
min_sampling_ratio = 0.000001
```
Also, this is the histogram.txt specified in the [NETWORK] section:

```
promise12 0.0 47.4894 56.5955 57.1504 57.8215 59.5325 65.8625 69.1147 71.1825 72.2834 73.6093 76.813 100.0
labelulabelfrom 0 1 2 3 4 5 6 7 8 9 10 11 12 13
labelulabelto 0 1 2 3 4 5 6 7 8 9 10 11 12 13
```
Finally, after performing the inference stage, I wanted to visualize the segmentation results: 01_niftynet_out.nii.gz (the first result) using a Matlab extension for visualizing .nii files: https://www.mathworks.com/matlabcentral/fileexchange/47072-3d-nifti-data-viewer
but it gives me an error about exceeding matrix dimensions:
```
t1 =

  struct with fields:

           hdr: [1×1 struct]
      filetype: 2
    fileprefix: 'examples/01_niftynet_out'
       machine: 'ieee-le'
           img: [5-D double]
      original: [1×1 struct]

Index exceeds matrix dimensions.
```
Apparently the obtained segmentation image has shape [5-D double]. Nevertheless, when I used this same extension to visualize one of the labels I used for training, it gave me this output:
```
           hdr: [1×1 struct]
      filetype: 2
    fileprefix: 'examples/1label'
       machine: 'ieee-le'
           img: [512×512×147 uint8]
      original: [1×1 struct]
```
meaning my labels have shape [512×512×147 uint8]. Shouldn't my inference output have the same shape rather than that 5-D shape?
Maybe my settings in the configuration file are not correct for this specific dataset, and that is why I am obtaining corrupted segmentation results; perhaps my interp_order or pixdim parameters are wrong?
Any help on this would be much appreciated.
Hi @amh28 If I remember correctly, setting `output_prob` to False gives an output of shape [x, y, z, 1, 1]; setting it to True gives [x, y, z, 1, k], where k is the number of classes.
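So a probabilistic output can be collapsed into a plain 3-D label map before viewing. A minimal sketch (assuming the volume has already been loaded into a NumPy array, e.g. via nibabel's `nib.load(...).get_fdata()`; the shapes below are illustrative):

```python
import numpy as np

# Illustrative probabilistic output of shape [x, y, z, 1, k],
# as produced with output_prob = True and num_classes = 14.
probs = np.random.rand(104, 104, 80, 1, 14)

# Take the most likely class along the last (class) axis, then drop
# the remaining singleton axis to recover a 3-D label volume that
# ordinary NIfTI viewers can display.
labels = np.argmax(probs, axis=-1).squeeze(axis=3).astype(np.uint8)

print(labels.shape)  # (104, 104, 80)
```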
@amh28 Hello, I also want to study NiftyNet, but when I downloaded the "Multi-Atlas Labeling Beyond the Cranial Vault" dataset, I found the dataset is incorrect. Could you share the dataset with me? Thank you very much!
The dataset looks like this now:
Hello, the NiftyNet paper (https://arxiv.org/pdf/1709.03485.pdf) references abdominal organ segmentation experiments using the "Multi-Atlas Labeling Beyond the Cranial Vault" dataset. I have already downloaded the dataset and would like to train on it from scratch using the dense_vnet architecture, but I could not find this experiment in the repository. The paper also references an implementation of a special Dice loss function for this multi-class segmentation in the configuration file, but I could not find it. Can you give me any insights on this? Thank you.