fabiocarrara / meye

A deep-learning-based web tool for translational and real-time pupillometry
https://www.pupillometry.it
GNU General Public License v3.0

RuntimeError: structure and input must have equal rank #38

Closed: cancan101 closed this issue 1 year ago

cancan101 commented 1 year ago

Describe the bug

When loading either a color or grayscale video (mp4 (mp4v) or avi (XVID)), I get the following traceback:

Traceback (most recent call last):
  File "/git/meye/predict.py", line 81, in <module>
    main(args)
  File "/git/meye/predict.py", line 53, in main
    (pupil_y, pupil_x), pupil_area = compute_metrics(pupil_map, thr=args.thr, nms=True)
  File "/git/meye/utils.py", line 27, in compute_metrics
    p = nms_on_area(p, s)
  File "/git/meye/utils.py", line 9, in nms_on_area
    labels, num_labels = label(x, structure=s)  # find connected components
  File "/lib/python3.10/site-packages/scipy/ndimage/_measurements.py", line 184, in label
    raise RuntimeError('structure and input must have equal rank')
RuntimeError: structure and input must have equal rank

~~Also oddly, tqdm shows 1/1. Not sure if that should be the number of frames.~~

To Reproduce

Steps to reproduce the behavior:

  1. I am running `python predict.py ~/Downloads/meye-segmentation_i128_s4_c1_f16_g1_a-relu.hdf5 ~/Desktop/output.avi` using a video I created and the v1 model file from GitHub

Additional context

Using the meye-2022-01-24.h5 model works.

Adding a print of the shapes of x and s in nms_on_area(x, s) when using meye-segmentation_i128_s4_c1_f16_g1_a-relu.hdf5 gives:

(128, 128, 2) (3, 3)

as opposed to meye-2022-01-24.h5, which gives:

(128, 128) (3, 3)
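
For reference, the rank mismatch is reproducible with scipy alone. A minimal sketch (the all-ones (3, 3) structure is a stand-in for whatever s nms_on_area actually passes):

```python
import numpy as np
from scipy import ndimage

# 2-D connectivity structure, standing in for the (3, 3) s printed above
s = np.ones((3, 3))

# 2-D pupil map (newer models): label() works as expected
labels, num_labels = ndimage.label(np.zeros((128, 128), dtype=bool), structure=s)

# 3-D map from the old model (pupil + glint channels): raises
# RuntimeError: structure and input must have equal rank
labels, num_labels = ndimage.label(np.zeros((128, 128, 2), dtype=bool), structure=s)
```
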
fabiocarrara commented 1 year ago

Thanks for reporting. It seems you are trying to use the old v0.1 model with the newer code.

You can either use the old pipeline by checking out the old code (tagged v0.1), or modify the new code slightly.

Since old models also predict the glint area map (which was removed in the newer models), you can discard it by adding

`pupil_map = pupil_map[:, :, 0]`

after https://github.com/fabiocarrara/meye/blob/909e2d0b3c0491a7ce3f80f87ad937d2017ba85f/predict.py#L51
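
A self-contained sketch of what the fix does (the random array is an illustrative stand-in for the model output; only the slicing line is the actual suggested change):

```python
import numpy as np

# stand-in for the model output on one frame; old v0.1 models emit 2 channels
pupil_map = np.random.rand(128, 128, 2)

# the suggested fix: keep only the pupil channel (index 0), drop the glint channel
pupil_map = pupil_map[:, :, 0]  # shape (128, 128), matching the newer models

assert pupil_map.shape == (128, 128)
```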

If you also need the glint map and want to use the newer code, you can instead apply to it the same processing that is applied to pupil_map and extend the saved fields to also contain the glint metrics (see the sketch below). Let me know if that's the case and you encounter difficulties.
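
For example, a hypothetical sketch (not code from the repo; it assumes the glint map sits in channel 1 of a `prediction` array and that compute_metrics handles it the same way as the pupil map):

```python
# hypothetical: split the old model's two output channels
pupil_map, glint_map = prediction[:, :, 0], prediction[:, :, 1]

# reuse the same metric computation for both maps
(pupil_y, pupil_x), pupil_area = compute_metrics(pupil_map, thr=args.thr, nms=True)
(glint_y, glint_x), glint_area = compute_metrics(glint_map, thr=args.thr, nms=True)

# then extend the fields predict.py saves per frame with
# glint_x, glint_y, and glint_area
```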

cancan101 commented 1 year ago

That makes sense. Given that, I do think the README should be updated to point to a newer version of the pre-trained model, as it currently links to the v1.

fabiocarrara commented 1 year ago

Totally agree. I've updated the README.