Closed a961009 closed 1 year ago
Here are more results at different scenes
https://github.com/ClementPinard/SfmLearner-Pytorch/blob/c2374d50f816e976c6889eb5e07198749d953bc6/run_inference.py#L66
should be
tensor_img = ((tensor_img/255 - 0.5)/0.5).to(device)
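To see why the `/255` matters: the network expects inputs in [-1, 1] (scale to [0, 1], then shift by 0.5 and divide by 0.5). A minimal NumPy sketch of that mapping, independent of the repo's code:

```python
import numpy as np

# Illustrative only: map 8-bit pixel values [0, 255] to [-1, 1],
# i.e. scale to [0, 1], subtract mean 0.5, divide by std 0.5.
img = np.array([0, 64, 128, 255], dtype=np.float32)

normalized = (img / 255 - 0.5) / 0.5

print(normalized)  # endpoints land exactly on -1.0 and 1.0
```

Skipping the `/255` step feeds values up to 509 into the network, far outside the range it was trained on.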
Hi, see #125 as to why we don't divide by 255.
Your results are indeed a bit weird. I'll try to reproduce them to see if something is wrong, and will keep you updated.
Thanks for your quick reply. After dividing by 255, the results become reasonable, as reported in the original paper. I'll keep you updated if I find anything.
Thanks for the PyTorch code! When I use the pretrained model (https://drive.google.com/drive/folders/1H1AFqSS8wr_YzwG2xWwAQHTfXN5Moxmx) to infer disparity and depth on KITTI images, the results look weird. The disparity and depth change very sharply: most of the disparity map is either 255 or 0, with almost no intermediate values.
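One quick way to quantify the symptom described above (a saturated disparity map) is to measure the fraction of pixels sitting at the extremes. This is a hypothetical helper, not part of the repo; it assumes `disp` is an 8-bit array of the rendered disparity:

```python
import numpy as np

def saturation_fraction(disp, low=0, high=255):
    """Fraction of pixels pinned at `low` or `high` in a disparity map."""
    extreme = np.count_nonzero((disp == low) | (disp == high))
    return extreme / disp.size

# A healthy disparity map has a small fraction; a map that is
# "either 255 or 0" as described above will be close to 1.0.
disp = np.array([[0, 255, 255, 0],
                 [0, 128, 255, 0]], dtype=np.uint8)
print(saturation_fraction(disp))  # 7 of 8 pixels are extreme -> 0.875
```

Running this on the raw model output before and after the normalization fix makes the difference easy to see at a glance.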