--corr_implementation alt saves memory but is slower, although you shouldn't need it for individual 640 x 400 resolution images. Timings in the paper were measured using --corr_implementation reg_cuda on an NVIDIA RTX 2080 Ti.
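For example, taking the command from this issue and only swapping the correlation flag (checkpoint and image paths unchanged):
python demo.py --restore_ckpt models/raftstereo-middlebury.pth -l=output_xvisio/rect_cam0/*.jpg -r=output_xvisio/rect_cam1/*.jpg --corr_implementation reg_cuda --mixed_precision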
Thank you for your reply. I followed your advice and switched to --corr_implementation reg_cuda; the runtime of raftstereo-middlebury.pth on a Tesla P100 is now about 400 ms.
Another problem concerns the raftstereo-realtime.pth model: I load it but consistently run into errors when loading the state_dict, and the GitHub page gives no clue about how to run the realtime model. Can you help me?
Are you able to run
python demo.py --restore_ckpt models/raftstereo-realtime.pth --shared_backbone --n_downsample 3 --n_gru_layers 2 --slow_fast_gru --valid_iters 7 --corr_implementation reg_cuda --mixed_precision
from the README? If not, can you post the error message?
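If that command also fails, a quick way to see why load_state_dict complains is to print the checkpoint's parameter names and shapes and compare them against the model you constructed; the architecture flags in the command above (--shared_backbone, --n_downsample 3, --n_gru_layers 2) have to match the realtime checkpoint before loading can succeed. A minimal inspection sketch, assuming the .pth file stores a plain state_dict:
import torch

# Hypothetical debugging snippet: list the realtime checkpoint's parameter
# names and tensor shapes so they can be compared with model.state_dict()
# for whichever architecture was actually constructed.
ckpt = torch.load("models/raftstereo-realtime.pth", map_location="cpu")
for name, tensor in sorted(ckpt.items()):
    print(name, tuple(tensor.shape))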
Closing this issue due to inactivity. Feel free to reopen if you still have problems or questions.
Hi, thank you for sharing this awesome work. I have run the base model raftstereo-middlebury.pth, without any refinement, on my custom dataset. The precision of the results is pretty good, but the runtime of the model prediction does not seem to match what is described in the paper. I added a time-analysis snippet in demo.py:
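Roughly, it wraps the forward pass like this (a minimal sketch assuming demo.py's variable names and forward call; torch.cuda.synchronize() is needed so that the asynchronously launched CUDA kernels are actually included in the measurement):
import time
import torch

# Minimal timing sketch around the forward pass in demo.py (variable names
# assumed). CUDA kernels run asynchronously, so synchronize before reading
# the clock, otherwise most of the GPU work is excluded from the timing.
torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    _, flow_up = model(image1, image2, iters=args.valid_iters, test_mode=True)
torch.cuda.synchronize()
print(f"inference time: {(time.time() - start) * 1000:.1f} ms")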
The command I use to predict stereo on my custom data is:
python demo.py --restore_ckpt models/raftstereo-middlebury.pth -l=output_xvisio/rect_cam0/*.jpg -r=output_xvisio/rect_cam1/*.jpg --corr_implementation alt --mixed_precision
The configuration of my local machine:
The configuration of my server machine:
Could you help me figure out what mistake I made that caused this problem? I really appreciate your help!