```
Traceback (most recent call last):
  File "/opt/conda/bin/fetal_brain_assessment", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/fetal_brain_assessment/__main__.py", line 6, in main
    chris_app.launch()
  File "/opt/conda/lib/python3.8/site-packages/chrisapp/base.py", line 462, in launch
    self.run(self.options)
  File "/opt/conda/lib/python3.8/site-packages/fetal_brain_assessment/fetal_brain_assessment.py", line 145, in run
    volumes = [Volume(f) for f in input_files]
  File "/opt/conda/lib/python3.8/site-packages/fetal_brain_assessment/fetal_brain_assessment.py", line 145, in <listcomp>
    volumes = [Volume(f) for f in input_files]
  File "/opt/conda/lib/python3.8/site-packages/fetal_brain_assessment/volume.py", line 51, in __init__
    pad[:data.shape[0], :data.shape[1], :data.shape[2]] = data
ValueError: could not broadcast input array from shape (77,70,67,1) into shape (77,70,60,1)
```
https://github.com/FNNDSC/pl-fetal-brain-assessment/blob/04adb7d08ced030bcd670e05ddb7ca4a60dc69ff/fetal_brain_assessment/volume.py#L48-L49
The input is required to have very specific dimensions: the slice into `pad` is clipped to the padded array's extent, so when a volume is larger than the target shape along any axis (here, 67 > 60 in the third dimension), the assignment fails with the broadcast error above. I'm not sure how the image dimensions affect model evaluation.
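A minimal sketch of one way to avoid the crash, assuming the goal is simply to force every volume into a fixed shape: crop any axis that exceeds the target before zero-padding. `TARGET_SHAPE` and `pad_or_crop` are hypothetical placeholders, not names from `volume.py`, and whether cropping is acceptable for the model is exactly the open question.

```python
import numpy as np

# Assumed placeholder; the real target shape is defined in volume.py.
TARGET_SHAPE = (77, 70, 60, 1)

def pad_or_crop(data: np.ndarray, target_shape=TARGET_SHAPE) -> np.ndarray:
    """Crop oversized axes to the target shape, then zero-pad the rest."""
    # Crop each axis down to the target size (no-op when already small enough).
    crop = tuple(slice(0, min(d, t)) for d, t in zip(data.shape, target_shape))
    cropped = data[crop]
    # Zero-pad the (possibly cropped) array up to the target shape.
    pad = np.zeros(target_shape, dtype=data.dtype)
    pad[tuple(slice(0, s) for s in cropped.shape)] = cropped
    return pad
```

With the failing example, a `(77, 70, 67, 1)` input would be cropped to `(77, 70, 60, 1)` instead of raising `ValueError`, though it is unclear whether silently discarding slices is better than rejecting the input with a clear error message.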