jahad9819jjj opened this issue 2 years ago
You should unsqueeze left_image and right_image; the model input assumes the images are batched.
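For anyone else hitting this: a minimal sketch of what `unsqueeze(0)` does at the shape level (the helper below is only an illustration, not part of the repo):

```python
# Illustration only: tensor.unsqueeze(0) prepends a batch dimension,
# turning a (C, H, W) image into the (1, C, H, W) batch the model expects.
def unsqueeze0(shape):
    """Mimic unsqueeze(0) on a shape tuple (hypothetical helper)."""
    return (1, *shape)

print(unsqueeze0((3, 256, 960)))  # -> (1, 3, 256, 960)
```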
I updated it to:

    @make_nograd_func
    def test_sample(left, right):
        model.eval()
        left = left.unsqueeze(0)
        right = right.unsqueeze(0)
        disp_ests, pred1_s3_up, pred2_s4 = model(left.cuda(), right.cuda())
        return disp_ests[-1]
Also, crop_w should be 960, as in the save_disp example.
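As a sketch of the kind of test-time padding this implies (the `pad_to` helper, the crop sizes, and the zero top/left padding are my assumptions, not the repo's exact code):

```python
import numpy as np

# Hypothetical sketch: zero-pad a (C, H, W) image at the top and left so it
# reaches the test crop size (e.g. crop_h=256, crop_w=960) before inference.
def pad_to(img, crop_h=256, crop_w=960):
    c, h, w = img.shape
    assert h <= crop_h and w <= crop_w
    return np.pad(img, ((0, 0), (crop_h - h, 0), (crop_w - w, 0)))

padded = pad_to(np.zeros((3, 240, 900)))
print(padded.shape)  # (3, 256, 960)
```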
I'm now receiving this error:

    Calculated padded input size per channel: (2 x 18 x 32). Kernel size: (3 x 3 x 3). Kernel size can't be greater than actual input size
I also tried other unsqueeze dimensions, but I get a channel mismatch.
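That "Kernel size can't be greater than actual input size" error usually means one dimension of the (padded) feature volume has shrunk below the 3x3x3 kernel after downsampling. A sketch of the standard convolution output-size arithmetic (the formula is a general assumption, not taken from this repo):

```python
# Standard stride-1 convolution output-size formula (assumption);
# an output size <= 0 means the kernel no longer fits the input.
def conv_output_size(n, k, stride=1, pad=0):
    return (n + 2 * pad - k) // stride + 1

# The failing padded input is (2, 18, 32) with a 3x3x3 kernel:
for dim in (2, 18, 32):
    print(dim, conv_output_size(dim, 3))
# The first dimension (2) yields output size 0, so the conv is invalid;
# a larger input crop (or fewer downsampling stages) is needed.
```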
Hi. I am thinking of applying your method to my own custom dataset, so I added the following code to save_disp.py's main, with reference to datasets/sceneflow_dataset.py. Then I get the following error.

Probably this is due to a wrong input to the preprocessing network. How can I generate a disparity image from a custom dataset?