wzrf opened this issue 6 years ago
I also noticed that the example inputs are grayscale images. What kind of preprocessing should I do on color images before using them with the network?
Hi, have you figured anything out? I tried replacing the images in the samples folder with some of my own, in the same format and resolution (1266x370) and also in grayscale, but I'm getting this error:
luajit: ./main.lua:1124: bad argument #2 to '?' (sizes do not match at /home/lucas/torch/extra/cutorch/lib/THC/generic/THCTensorCopy.c:48)
stack traceback:
        [C]: at 0x7f7d610de490
        ./main.lua:1124: in main chunk
        [C]: at 0x00405d50
Thanks.
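In case it helps to reproduce, this is roughly how I'm checking that the two inputs have the same dimensions (a minimal sketch, assuming the torch `image` package; the file names are just my local copies):

```lua
local image = require 'image'

-- Load both views as single-channel byte tensors and compare their sizes;
-- the left and right images need to have identical dimensions.
local left  = image.load('left.png', 1, 'byte')
local right = image.load('right.png', 1, 'byte')
print(left:size())   -- e.g. 1 x 370 x 1266
print(right:size())  -- should report the same height and width
```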
That won't be a problem as long as your left and right images are the same size and saved as grayscale PNGs. I've tried it with several different image sets. Use the given command line, substituting your own image paths and the correct maximum disparity:

./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left left.png -right right.png -disp_max disp
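If your source images are color, something like this should produce the grayscale PNGs the network expects (a rough sketch, assuming the torch `image` package is installed; the file names are only examples):

```lua
local image = require 'image'

-- Load a 3-channel color image as floats in [0, 1], convert it to its
-- luminance (Y) channel, and write it back out as a single-channel PNG.
local rgb  = image.load('left_color.png', 3, 'float')
local gray = image.rgb2y(rgb)
image.save('left.png', gray)
```

Do the same for the right image, then run the command above on the two grayscale files.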
Hello! I want to know how I can apply the trained network to images with other resolutions. I notice the samples are 1266x370; how can I apply this to the Middlebury dataset?