BingyuanW opened this issue 2 years ago
Hi,
I've created an inference script that lets you get results on a single image.
You can run it as follows (after cloning the repo and pip installing the requirements):
# inference with model trained on NYUv2
!python ./code/inference.py \
--gpu_or_cpu cpu \
--image_path path_to_your_image \
--save_visualize \
--ckpt_dir path_to_checkpoint
I'm getting the following:
It seems correct but the colors are reversed.
Hi,
The difference between the results in our paper and your visualization seems to be whether the depth values are divided by the maximum value of the depth map. This is also done in the inference script provided by NielsRogge, while we used matplotlib for visualization with the 'jet' colormap:
cmapper = matplotlib.cm.get_cmap('jet')
Furthermore, for the qualitative comparisons in the paper, we divide the depth maps by the maximum value over all the compared depth maps (in a row), so that the results of different methods can be compared more clearly.
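If it helps, here is a minimal sketch of that visualization step. The depth map here is random placeholder data (in practice it would be the model's prediction), and the row-wise comparison case is noted in a comment:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical predicted depth map; in practice this comes from the model.
depth = np.random.rand(480, 640).astype(np.float32) * 10.0

# Divide by the maximum before applying the colormap. For the paper's
# row-wise comparisons, the divisor would instead be the maximum over
# all of the compared depth maps in that row.
norm = depth / depth.max()

cmapper = plt.get_cmap('jet')  # same 'jet' colormap as mentioned above
colored = (cmapper(norm)[:, :, :3] * 255).astype(np.uint8)  # RGBA -> RGB
```

Whether you normalize per map or per row changes the absolute colors but not the relative ordering of depths, which is why two visualizations of the same prediction can look so different.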
Hi, I think I have figured it out:
When I visualize the depth map you provide, it looks like this:
I notice the red colors are all at the top of the image, so I crop that region out:
We get the correct visualization now!
The area I crop out is not supervised by the GT, so the values there may be strange. For qualitative comparisons, what matters is not the color itself but the difference between depth values.
Hope for further discussion.
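For reference, the crop-then-normalize step described above can be sketched as follows. The 45-pixel crop height is an assumption (NYUv2-style evaluation crops start around there); adjust it to whatever region is actually unsupervised in your setup:

```python
import numpy as np

# Hypothetical prediction; in practice this is the model's depth map.
depth = np.random.rand(480, 640).astype(np.float32)

# Trim the unsupervised top rows BEFORE normalizing, so their unreliable
# values do not dominate the colormap range.
CROP_TOP = 45  # assumption: match your dataset's eval crop
cropped = depth[CROP_TOP:, :]
norm = cropped / cropped.max()
```

Cropping before dividing by the max is what removes the "reversed-looking" colors: an outlier in the unsupervised strip otherwise stretches the normalization range for the whole image.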
Do you agree with BingyuanW's point? Compared with the results of other papers, does this treatment affect the fairness of the comparison?
I also notice a weird horizontal artifact on the top edge of every image. Any idea why this is happening?
Thank you! Which file did you edit like that?
Thanks for your great work!
I have colored your predicted depth maps, but the result is quite different from the one in your paper. Is there something wrong here? yours: mine: