autonomousvision / unimatch

[TPAMI'23] Unifying Flow, Stereo and Depth Estimation
https://haofeixu.github.io/unimatch/
MIT License

The time required for stereo matching #27

Closed: ynl-kusama closed this issue 1 year ago

ynl-kusama commented 1 year ago

Hello! I am currently trying to integrate unimatch into a ROS system, but stereo matching takes more than 3 seconds per frame and does not run as smoothly as the demo video. My input images are 1920x1080, and even after reducing the resolution it takes about the same amount of time. Is there any way to increase the frame rate?

haofeixu commented 1 year ago

Hi, you can try using a smaller inference size for inference: https://github.com/autonomousvision/unimatch/blob/master/scripts/gmstereo_demo.sh#L7
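
For reference, a minimal sketch of what running at a reduced inference size involves (this is not the repository's exact code; `model(left, right)` is a hypothetical call signature, and the downsample/upsample-and-rescale logic is an assumption about what the inference-size option does):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_disparity(model, left, right, inference_size=(384, 768)):
    """Run stereo matching at a reduced resolution and rescale the result.

    left, right: [1, 3, H, W] tensors; inference_size: (h, w) used for the
    forward pass. Smaller sizes trade accuracy for speed.
    """
    ori_h, ori_w = left.shape[-2:]

    # Downsample the stereo pair to the (smaller) inference size.
    left_small = F.interpolate(left, size=inference_size,
                               mode='bilinear', align_corners=True)
    right_small = F.interpolate(right, size=inference_size,
                                mode='bilinear', align_corners=True)

    disp = model(left_small, right_small)  # hypothetical call signature

    # Upsample the disparity map back to the original resolution and
    # rescale its values by the width ratio (disparity is measured in pixels).
    disp = F.interpolate(disp.unsqueeze(1), size=(ori_h, ori_w),
                         mode='bilinear', align_corners=True).squeeze(1)
    disp = disp * (ori_w / inference_size[1])
    return disp
```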

ynl-kusama commented 1 year ago

Thank you for your reply! Indeed, reducing the inference_size makes stereo matching faster. For reference, could you please tell me the inference_size setting used in your demo video?

haofeixu commented 1 year ago

Hi, for our demo on Hugging Face, we use a maximum inference size of 640x960: https://huggingface.co/spaces/haofeixu/unimatch/blob/main/app.py#L46

For inference on the full-resolution Middlebury dataset, we resize the images to 1024x1536: https://github.com/autonomousvision/unimatch/blob/master/scripts/gmstereo_demo.sh#L7
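
As a rough illustration (an assumption about how such a resolution cap could be applied, not the demo's exact code), one way to pick an inference size no larger than 640x960 while roughly keeping the aspect ratio and keeping both sides divisible by 32:

```python
def capped_inference_size(h, w, max_h=640, max_w=960, divisor=32):
    """Pick an inference size no larger than (max_h, max_w).

    Approximately preserves the input aspect ratio and rounds both sides
    down to a multiple of `divisor`, assuming the network expects sizes
    divisible by its downsampling factor.
    """
    scale = min(max_h / h, max_w / w, 1.0)  # never upsample
    new_h = max(divisor, int(h * scale) // divisor * divisor)
    new_w = max(divisor, int(w * scale) // divisor * divisor)
    return new_h, new_w

# Example: a 1920x1080 camera frame would be run at 960x512.
print(capped_inference_size(1080, 1920))  # -> (512, 960)
```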

ynl-kusama commented 1 year ago

Thank you for your kind help!