@SenthilCaesar , I think the training part is not properly streamlined with the prediction part. I trained a model and then tried to predict with that model, but ran into the following error:
```
Saving data to disk...
Pre-Processing Time Taken : 1.7 min
Loading sagittal model from disk...
Traceback (most recent call last):
  File "../pipeline/dwi_masking.py", line 713, in <module>
    dwi_mask_sagittal = predict_mask(cases_file_s, trained_model_folder, view='sagittal')
  File "../pipeline/dwi_masking.py", line 139, in predict_mask
    loaded_model.load_weights(trained_folder + '/weights-' + view + '-improvement-' + optimal + '.h5')
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/keras/engine/network.py", line 1157, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 408, in __init__
    swmr=swmr)
  File "/rfanfs/pnl-zorro/software/pnlpipe3/miniconda3/envs/dmri_seg/lib/python3.6/site-packages/h5py/_hl/files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'data/model_folder_test/weights-sagittal-improvement-09.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
```
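The immediate failure is that `predict_mask` builds the checkpoint path from a fixed epoch suffix (`optimal`, here `09`) that does not exist in the freshly trained folder. Below is a minimal sketch, assuming only the `weights-<view>-improvement-<epoch>.h5` naming shown in the traceback, of how the checkpoint could instead be discovered on disk; `find_checkpoint` is a hypothetical helper, not part of the current code:

```python
import glob
import os

def find_checkpoint(trained_folder, view):
    """Return the latest 'weights-<view>-improvement-<epoch>.h5' file that
    training actually wrote, instead of assuming a fixed epoch suffix."""
    pattern = os.path.join(trained_folder, 'weights-' + view + '-improvement-*.h5')
    candidates = sorted(glob.glob(pattern))
    if not candidates:
        raise FileNotFoundError('No ' + view + ' checkpoint found in ' + trained_folder)
    return candidates[-1]  # zero-padded epoch numbers make the last one the highest

# e.g. inside predict_mask:
# loaded_model.load_weights(find_checkpoint(trained_folder, view))
```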
Provided models
```
(dmri_seg) [tb571@pnl-oracle tests]$ ls ../model_folder/
CompNetBasicModel.json  weights-axial-improvement-08.h5    weights-sagittal-improvement-09.h5
IITmean_b0_256.nii.gz   weights-coronal-improvement-08.h5
```
After training
```
(dmri_seg) [tb571@pnl-oracle tests]$ ls data/model_folder_test/
CompNetBasicModel.json  IITmean_b0_256.nii.gz  sagittal-compnet_final_weight.h5  weights-sagittal-improvement-01.h5
```
The difference in the number of files between the two listings tells us the training output is incomplete: only a sagittal checkpoint was produced, and its epoch suffix (`01`) does not match the one prediction looks for (`09`).
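As a quick way to see what a trained folder is still missing relative to the provided `model_folder`, a check along these lines could be used (a sketch only; `missing_model_files` is a hypothetical helper based on the listings above):

```python
import glob
import os

def missing_model_files(model_folder):
    """List the required files/patterns (per the provided model_folder above)
    that are absent from a freshly trained model folder."""
    required = [
        'CompNetBasicModel.json',
        'IITmean_b0_256.nii.gz',
        'weights-axial-improvement-*.h5',
        'weights-coronal-improvement-*.h5',
        'weights-sagittal-improvement-*.h5',
    ]
    return [p for p in required if not glob.glob(os.path.join(model_folder, p))]

print(missing_model_files('data/model_folder_test'))
# With the listing above, this would report the axial and coronal checkpoints.
```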
TODO
I have streamlined the testing process for this software. Please use it to test your commit. Create a new branch as shown below:
```bash
ssh pnl-maxwell   # or pnl-oracle or grx03
source /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/train_env.sh
# I trust you to know /path/to/
cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/
git checkout -b fix-training
cd /path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests
./pipeline_test.sh
```
You may look into the `/path/to/pnlpipe3/CNN-Diffusion-MRIBrain-Segmentation/tests/data` folder for details. Please fix it.