ellisdg / 3DUnetCNN

Pytorch 3D U-Net Convolution Neural Network (CNN) designed for medical image segmentation

LiTS Dataset #120

Closed amorimdiogo closed 5 years ago

amorimdiogo commented 6 years ago

Hello! I have created an .h5 file with the LiTS dataset with shape (140, 2, 128, 128, 128): 140 volumes, 2 channels each (a scan and a mask), and 128 slices of 128x128 px per channel. I adjusted the header of the train_isensee2017.py file accordingly. When I try to run it, I get the error below from the get_training_and_validation_generators() function. What could it be? Thanks in advance!

Traceback (most recent call last):
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/train_isensee2017.py", line 105, in <module>
    main(overwrite=config["overwrite"])
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/train_isensee2017.py", line 101, in main
    augment_distortion_factor=config["distort"])
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/unet3d/generator.py", line 57, in get_training_and_validation_generators
    validation_file=validation_keys_file)
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/unet3d/generator.py", line 116, in get_validation_split
    nb_samples = data_file.root.data.shape[0]
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/tables/group.py", line 840, in __getattr__
    return self._f_get_child(name)
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/tables/group.py", line 712, in _f_get_child
    self._g_check_has_child(childname)
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/tables/group.py", line 399, in _g_check_has_child
    % (self._v_pathname, name))
tables.exceptions.NoSuchNodeError: group ``/`` does not have a child named ``data``
Closing remaining open files:/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/lits_data.h5...done
Creating validation split...

Process finished with exit code 1
ellisdg commented 6 years ago

tables.exceptions.NoSuchNodeError: group / does not have a child named data

The way I have the code set up, it expects the data file to have three groups: "data", "truth", and "affine". You can also add a "subject_ids" group. During training, the generator fetches batches from the "data" and "truth" groups in the h5 file.

Your problem is that your h5 file does not have the "data" group. It may be missing the other groups too, but the missing "data" group is what triggered this error.

Of note, this is the same problem that @SnowRipple had in #95 when trying to create his own h5 file. I got confused when answering his question, and I'm pretty sure I totally messed him up!
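
For illustration, here is a minimal sketch of writing a file with those groups using h5py; the subject count, shapes, and contents below are placeholders, not the actual preprocessing:

import h5py
import numpy as np

n_subjects = 4  # placeholder count (e.g. 140 for the full LiTS set)

# Placeholder arrays; substitute the real preprocessed volumes and masks.
data = np.zeros((n_subjects, 1, 128, 128, 128), dtype=np.float32)  # image volumes
truth = np.zeros((n_subjects, 1, 128, 128, 128), dtype=np.uint8)   # segmentation masks
affine = np.repeat(np.eye(4)[np.newaxis], n_subjects, axis=0)      # one 4x4 affine per subject

with h5py.File('lits_data.h5', 'w') as h5f:
    h5f.create_dataset('data', data=data)
    h5f.create_dataset('truth', data=truth)
    h5f.create_dataset('affine', data=affine)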

amorimdiogo commented 6 years ago

So, I created an .h5 file with the following code:

import h5py
import numpy as np

data_set = create_dataset(data_dir, shorten=True)  # data_set.shape = (m, 2, 128, 128, 128)

h5f = h5py.File('lits_data.h5', 'w')
h5f.create_dataset('data', data=np.expand_dims(data_set[:, 0, :, :, :], 1))   # scans
h5f.create_dataset('truth', data=np.expand_dims(data_set[:, 1, :, :, :], 1))  # masks
h5f.close()

Here the data_set of shape (m, 2, 128, 128, 128) is split into two arrays of shape (m, 1, 128, 128, 128) each, saved under "data" and "truth" respectively.

I get this error:

Loading previous validation split...
Number of training steps:  1
Number of validation steps:  1
Epoch 1/500
Traceback (most recent call last):
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/train_isensee2017.py", line 120, in <module>
    main(overwrite=config["overwrite"])
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/train_isensee2017.py", line 114, in main
    n_epochs=config["n_epochs"])
  File "/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/unet3d/training.py", line 88, in train_model
    early_stopping_patience=early_stopping_patience))
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/keras/engine/training.py", line 2230, in fit_generator
    class_weight=class_weight)
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/keras/engine/training.py", line 1877, in train_on_batch
    class_weight=class_weight)
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/keras/engine/training.py", line 1476, in _standardize_user_data
    exception_prefix='input')
  File "/home/albaroz/anaconda3/envs/tese/lib/python3.6/site-packages/keras/engine/training.py", line 123, in _standardize_input_data
    str(data_shape))
ValueError: Error when checking input: expected input_1 to have shape (2, 128, 128, 128) but got array with shape (1, 128, 128, 128)
Closing remaining open files:/home/albaroz/PycharmProjects/tese/3DUnetCNN-master/brats/lits_data.h5...done

So I'm really struggling with the .h5 file shape. All I have are volumes (data) and segmentation masks (truth), so I can only provide those two arrays... How should I save my data? Thanks!

ellisdg commented 6 years ago

I think this is a problem with your model configuration parameters. The model is expecting data with shape (2, 128, 128, 128), but the h5 file is feeding it data of shape (1, 128, 128, 128). If you only have one image type/modality (not counting the segmentation image), then the h5 file is correct and the model is wrong. You need to delete the model file and create a new model with the input_shape variable set to (1, 128, 128, 128). If you are going off of one of my training scripts, you can change the input shape by setting config["input_shape"] = (1, 128, 128, 128).
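
Concretely, a sketch of that change, assuming the config dictionary laid out at the top of the brats training scripts (with "image_shape" and "nb_channels" keys):

# One input modality, 128x128x128 voxels per volume.
config["image_shape"] = (128, 128, 128)
config["nb_channels"] = 1
config["input_shape"] = tuple([config["nb_channels"]] + list(config["image_shape"]))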

love112358 commented 6 years ago

Hello. When I ran the code, I encountered the same problem:

Traceback (most recent call last):
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/train.py", line 115, in <module>
    main(overwrite=config["overwrite"])
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/train.py", line 77, in main
    deconvolution=config["deconvolution"])
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/unet3d/model.py", line 60, in unet_model_3d
    image_shape=input_shape[-3:])(current_layer)
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/unet3d/model.py", line 127, in get_up_convolution
    raise ImportError("Install keras_contrib in order to use deconvolution. Otherwise set deconvolution=False."
ImportError: Install keras_contrib in order to use deconvolution. Otherwise set deconvolution=False.
Try: pip install git+https://www.github.com/farizrahman4u/keras-contrib.git
Closing remaining open files:/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/brats_data.h5...done

Process finished with exit code 1

@albatroz95 @ellisdg @alkamid You said above that some code is needed to create brats_data.h5, but I don't know how to write that program or where to put it. I'm just starting to learn deep learning, so I hope I can get your help. Can you share the code you wrote with me? Thank you
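
As the traceback above says, there are two ways out: install keras_contrib with pip install git+https://www.github.com/farizrahman4u/keras-contrib.git, or disable deconvolution in the training config. A sketch of the latter, assuming the config dict in brats/train.py:

# Use upsampling instead of transposed convolutions,
# which avoids the keras_contrib dependency.
config["deconvolution"] = False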

love112358 commented 6 years ago

I tried to create a new brats_data.h5 file under the brats folder, and then copied @albatroz95's code into the brats_data.h5 file, as follows:

data_set = create_dataset(data_dir, shorten=True)  # data_set.shape = (m, 2, 128, 128, 128)

h5f = h5py.File('lits_data.h5', 'w')
h5f.create_dataset('data', data=np.expand_dims(data_set[:, 0, :, :, :], 1))
h5f.create_dataset('truth', data=np.expand_dims(data_set[:, 1, :, :, :], 1))

Then I ran the program and got this error:

Traceback (most recent call last):
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/train.py", line 115, in <module>
    main(overwrite=config["overwrite"])
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/train.py", line 67, in main
    data_file_opened = open_data_file(config["data_file"])
  File "/home/weijiaxu/pythoncode/3DUnetCNN-master/unet3d/data.py", line 93, in open_data_file
    return tables.open_file(filename, readwrite)
  File "/home/weijiaxu/anaconda3test/envs/tensorflow/lib/python3.5/site-packages/tables/file.py", line 320, in open_file
    return File(filename, mode, title, root_uep, filters, **kwargs)
  File "/home/weijiaxu/anaconda3test/envs/tensorflow/lib/python3.5/site-packages/tables/file.py", line 784, in __init__
    self._g_new(filename, mode, **params)
  File "tables/hdf5extension.pyx", line 489, in tables.hdf5extension.File._g_new
tables.exceptions.HDF5ExtError: HDF5 error back trace

  File "H5F.c", line 604, in H5Fopen
    unable to open file
  File "H5Fint.c", line 1087, in H5F_open
    unable to read superblock
  File "H5Fsuper.c", line 277, in H5F_super_read
    file signature not found

End of HDF5 error back trace

Unable to open/create file '/home/weijiaxu/pythoncode/3DUnetCNN-master/brats/brats_data.h5'

Process finished with exit code 1

I hope I can get your help. Thank you @albatroz95 @ellisdg @alkamid
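
The "file signature not found" message means brats_data.h5 is not actually an HDF5 file: pasting Python source into a file named brats_data.h5 produces a plain-text file, which PyTables then cannot open. The snippet needs to be run as a Python script so that h5py writes a real HDF5 file to disk. A minimal sketch, with a placeholder array standing in for the preprocessed data:

# Hypothetical make_h5.py -- run `python make_h5.py` from the brats folder.
import h5py
import numpy as np

# Placeholder for the real preprocessed (m, 2, 128, 128, 128) array.
data_set = np.zeros((10, 2, 128, 128, 128), dtype=np.float32)

with h5py.File('brats_data.h5', 'w') as h5f:
    h5f.create_dataset('data', data=np.expand_dims(data_set[:, 0], 1))   # scans
    h5f.create_dataset('truth', data=np.expand_dims(data_set[:, 1], 1))  # masks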

love112358 commented 5 years ago

When I ran the code, I encountered another problem. @alkamid @ellisdg @albatroz95 I hope I can get your help. Thank you

Loading previous validation split...
Number of training steps:  63
Number of validation steps:  10
Traceback (most recent call last):
  File "/tmp/pycharm_project_271/3DUnetCNN/brats/train.py", line 134, in <module>
    main(overwrite=config["overwrite"])
  File "/tmp/pycharm_project_271/3DUnetCNN/brats/train.py", line 116, in main
    augment_distortion_factor=config["distort"])
ValueError: too many values to unpack (expected 3)
Closing remaining open files:/tmp/pycharm_project_271/3DUnetCNN/brats/brats_data.h5...done

Process finished with exit code 1

nysuka commented 5 years ago

I'm also trying to apply the Isensee 3D U-Net model to the LiTS dataset and was wondering what results @albatroz95, @love112358, or others are getting with this dataset and architecture. For liver segmentation I'm getting a mean Dice of 0.918 +/- 0.049, a median of 0.927, a minimum of 0.693, and a maximum of 0.955. Even though the scores appear high, some of the predicted edges do not match the manually segmented edges well for the whole liver, and the liver tumor segmentation did not perform well at all. I'd be interested in suggestions to improve.
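
For anyone comparing numbers: the per-case scores quoted above are presumably the standard Dice overlap between the predicted and manual masks; a minimal sketch of the metric:

import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2*|A & B| / (|A| + |B|).
    Assumes at least one of the masks is non-empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())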