taigw / brats17

Brain tumor segmentation for MICCAI 2017 BraTS challenge
BSD 3-Clause "New" or "Revised" License

Regarding subimage patch and label size #23

Closed pranavkantgaur closed 5 years ago

pranavkantgaur commented 5 years ago

Hi,

I was curious how you selected the sub-image patch size of [19, 144, 144, 4]. Is it based on cross-validation? Further, as already asked in https://github.com/taigw/brats17/issues/20, why does the corresponding label have 11 units along the depth axis ([11, 144, 144, 1]) as opposed to the 19 in the input sample?

wellescastro commented 5 years ago

+1

HowieMa commented 5 years ago

+1 That is so weird! I use the BRATS15 dataset; the original data shape is 155 × 240 × 240, but the sub-image shape is 19 × 144 × 144.

According to his function

center_point = get_random_roi_sampling_center(volume_shape, sub_label_shape, batch_sample_model, boundingbox)

and also

sub_data_moda = extract_roi_from_volume(transposed_volumes[moda], center_point, sub_data_shape)

it seems that you randomly get a sub-image by cropping the original image, which I think may miss some information about the tumor. How do you guarantee this cropping method will capture all the tumor information we want?
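
For reference, here is a minimal sketch of this kind of random cropping in NumPy. It is not the repo's actual get_random_roi_sampling_center / extract_roi_from_volume logic (which also takes a sampling mode and a bounding box), just the basic idea of drawing a random sub-volume:

import numpy as np

def random_patch(volume, patch_shape, rng=np.random):
    # Pick a random corner so the patch lies fully inside the
    # volume (D, H, W, C), then crop.
    starts = [rng.randint(0, v - p + 1) for v, p in zip(volume.shape, patch_shape)]
    return volume[tuple(slice(s, s + p) for s, p in zip(starts, patch_shape))]

volume = np.zeros((155, 240, 240, 4), dtype=np.float32)  # one BraTS case, 4 modalities
patch = random_patch(volume, (19, 144, 144, 4))
print(patch.shape)  # (19, 144, 144, 4)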

leigaoyi commented 5 years ago

I love this question. I ran MSNet and got 91.4% on whole tumor segmentation on BRATS2015 with this (19, 144, 144, 4), but I don't understand why 19 and 11.

HowieMa commented 5 years ago

> I love this question. I ran MSNet and got 91.4% on whole tumor segmentation on BRATS2015 with this (19, 144, 144, 4), but I don't understand why 19 and 11.

Well, that is the behavior of the model itself! You could revise util/MSNet.py like this:

if __name__ == '__main__':
    # Append this to util/MSNet.py (tensorflow is already imported there as tf).
    x = tf.placeholder(tf.float32, shape=[1, 96, 96, 96, 1])
    y = tf.placeholder(tf.float32, shape=[1, 96, 96, 96, 2])
    net = MSNet(num_classes=2)
    predicty = net(x, is_training=True)
    print(x)         # input placeholder
    print(predicty)  # network output, shorter along z
    print(y)         # label placeholder

and run it like

python util/MSNet.py

You will find that the result is

(1, 96, 96, 96, 1)   # x
(1, 88, 96, 96, 2)   # predicty: 8 slices shorter along z
(1, 96, 96, 96, 2)   # y

I hope this could help you solve the problem.

leigaoyi commented 5 years ago

> Well, that is the behavior of the model itself! You could revise util/MSNet.py like this: […] I hope this could help you solve the problem.

I thought about it again: the crop along the axial axis is 19 (out of 155), but crops of 19 are also taken along the coronal and sagittal directions; with the three cuboids stacked together, doesn't that cover most of the volume?

HowieMa commented 5 years ago

> I thought about it again: the crop along the axial axis is 19 (out of 155), but crops of 19 are also taken along the coronal and sagittal directions; with the three cuboids stacked together, doesn't that cover most of the volume?

Yeah! As you know, the shape of the raw data is 155 × 240 × 240, but we only randomly select sub-volumes of shape 19 × 144 × 144 from it. With a large number of iterations (here, 20,000), we cover the whole volume (155 × 240 × 240), probabilistically speaking.

I guess he did this because it helps save memory during training and testing!
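
A quick back-of-the-envelope check of that claim, assuming uniformly sampled slab positions (a simplification; the repo's sampler can also be biased by a bounding box):

# Probability that a fixed axial slice is missed by all N random 19-slice
# slabs cropped from a 155-slice volume. Each slab covers a given interior
# slice with probability of roughly 19/155.
p_cover = 19.0 / 155.0
N = 20000
print((1.0 - p_cover) ** N)  # underflows to 0.0: every slice is covered in expectation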

taigw commented 5 years ago

To save memory, training and testing were based on image patches, not the entire image. The convolutions along the z-axis use 'valid' mode, which is why the output size is reduced by 8 in the z-axis (e.g., 19 → 11, and 96 → 88 above).
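
To illustrate the arithmetic: with 'valid' padding, each convolution with kernel size 3 along z trims 2 slices, so four such layers give 19 → 17 → 15 → 13 → 11. A minimal sketch in the same TF 1.x style as the snippet above (a hypothetical four-layer stack, not the actual MSNet architecture):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, shape=[1, 19, 144, 144, 4])
for _ in range(4):
    # Kernel of size 3 along z only, 'VALID' padding: trims 2 slices per layer.
    w = tf.Variable(tf.random_normal([3, 1, 1, int(x.shape[-1]), 8]))
    x = tf.nn.conv3d(x, w, strides=[1, 1, 1, 1, 1], padding='VALID')
print(x.shape)  # (1, 11, 144, 144, 8)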