gangweiX / IGEV

[CVPR 2023] Iterative Geometry Encoding Volume for Stereo Matching
MIT License

Need help to do inference for grayscale images #32

Open mkothule opened 1 year ago

mkothule commented 1 year ago

I want to run the network on grayscale images (single channel).

I get this error while running the network on gray images:

    Traceback (most recent call last):
      File "demo_imgs.py", line 100, in <module>
        demo(args)
      File "demo_imgs.py", line 50, in demo
        image1 = load_image(imfile1)
      File "demo_imgs.py", line 29, in load_image
        img = torch.from_numpy(img).permute(2, 0, 1).float()
    RuntimeError: number of dims don't match in permute

I tried copying the same gray values to all 3 channels, but the results are not very good.
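For reference, the permute error happens because a grayscale PNG loads as a 2-D (H, W) array, while permute(2, 0, 1) expects an (H, W, 3) array. Below is a minimal sketch of a load_image variant that replicates the gray channel three times before the permute; the PIL/numpy loading code is an assumption based on RAFT-style demo scripts, not necessarily the exact code in demo_imgs.py:

```python
import numpy as np
import torch
from PIL import Image

DEVICE = 'cuda'

def load_image_gray(imfile):
    # A grayscale PNG loads as a 2-D (H, W) array, which is why
    # permute(2, 0, 1) fails with "number of dims don't match".
    img = np.array(Image.open(imfile)).astype(np.uint8)
    if img.ndim == 2:
        # Replicate the gray channel three times so the 3-channel
        # pretrained network can consume the image unchanged.
        img = np.stack([img] * 3, axis=-1)
    img = torch.from_numpy(img).permute(2, 0, 1).float()
    return img[None].to(DEVICE)
```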

I see ETH3D is a grayscale image dataset, so I also tried the shared ETH3D model, but I still get the above error.

Can you please share what change is needed to adapt the network to grayscale images?
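For reference, a generic way to adapt an RGB-pretrained model to single-channel input is to replace its first convolution and initialize the new weights with the sum of the pretrained weights over the input-channel axis (this keeps activations on the same scale as feeding a replicated-gray image). The sketch below assumes the first layer is a plain nn.Conv2d with groups=1; the module path in the usage comment is hypothetical, not IGEV's actual attribute name:

```python
import torch
import torch.nn as nn

def adapt_first_conv_to_gray(conv: nn.Conv2d) -> nn.Conv2d:
    """Return a 1-channel copy of an RGB nn.Conv2d, reusing its pretrained weights."""
    new_conv = nn.Conv2d(
        in_channels=1,
        out_channels=conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        dilation=conv.dilation,
        bias=conv.bias is not None,
    )
    with torch.no_grad():
        # Sum the (out_ch, 3, kH, kW) RGB weights over the channel axis
        # to get (out_ch, 1, kH, kW) weights for grayscale input.
        new_conv.weight.copy_(conv.weight.sum(dim=1, keepdim=True))
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv

# Hypothetical usage; the attribute path is a placeholder, not IGEV's real module name:
# model.feature.conv_stem = adapt_first_conv_to_gray(model.feature.conv_stem)
```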

gangweiX commented 1 year ago

Can you give me your gray images?

mkothule commented 1 year ago

Currently I am using KITTI images (converted RGB to gray) for experimentation.

(attached grayscale images: 000000_10_image_3 png_gray, 000000_10_image_2_gray)

gangweiX commented 1 year ago

You can use the KITTI pretrained model; it will perform well.

mkothule commented 1 year ago

Thanks, gangweiX. I see sensible output with the KITTI 2015 pre-trained network for the above images.