EPFL-VILAB / omnidata

A Scalable Pipeline for Making Steerable Multi-Task Mid-Level Vision Datasets from 3D Scans [ICCV 2021]

Question about reproducing the depth estimation results in the paper #43

Open Twilight89 opened 1 year ago

Twilight89 commented 1 year ago

Hello, I want to reproduce the depth estimation results on NYU_depth_V2 reported in the paper, but I found that the output depth scale is totally different from the NYU data's. What post-processing am I supposed to do? Should I rescale pred_depth or gt_depth?

Besides, I found that the depth scales differ between the v2 model and the v1 model.

Could you give me some advice? Hope to hear from you. Thanks.
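
(Note on the scale question: if the omnidata depth models are trained with a scale- and shift-invariant loss, as MiDaS-style models typically are, the raw predictions are only defined up to an affine transform, and the usual evaluation step is to align pred_depth to gt_depth per image before computing metrics. The sketch below assumes that setup; the helper name is made up and nothing here is confirmed by the authors in this thread.)

```python
import numpy as np

def align_depth_least_squares(pred, gt, mask=None):
    """Fit depth = scale * pred + shift to the ground truth in a
    least-squares sense, then return the aligned prediction.
    pred, gt: 2D float arrays of the same shape."""
    if mask is None:
        mask = gt > 0  # NYU ground truth uses 0 for invalid pixels
    p = pred[mask].astype(np.float64)
    g = gt[mask].astype(np.float64)
    # Solve min_{s,t} ||s*p + t - g||^2 with a linear least-squares fit.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, g, rcond=None)
    return scale * pred + shift
```

Metrics (AbsRel, RMSE, delta thresholds, etc.) would then be computed between the aligned prediction and gt_depth, leaving the ground truth itself untouched.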

shwhjw commented 1 year ago

I have a similar issue: I can't replicate the example depth images. My depth results are reversed compared to what is shown in the examples.

mine: [attached image of my depth output]

As you can see, the near/far colours are reversed. Does anyone know how to fix it?

Edit: ignore this, I was generating my "inverted" depth images with a script included in sdfstudio that looks like it is an edited demo.py from omnidata. Running omnidata's demo.py results in the correct depth images for me.
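
(If anyone else sees the same reversal: one sanity check, assuming the suspect output is inverse depth / disparity rather than depth, is to invert it pixel-wise and see whether the near/far ordering comes out right. This is a hypothetical helper, not part of either repo's demo.py.)

```python
import numpy as np

def inverse_to_depth(inv_depth, eps=1e-6):
    """Treat the input as inverse depth and convert it to a
    depth-like map, guarding against division by zero."""
    depth = 1.0 / np.clip(inv_depth, eps, None)
    # Renormalise to [0, 1] purely for visualisation.
    return (depth - depth.min()) / (depth.max() - depth.min() + eps)
```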

elenacliu commented 1 year ago

@shwhjw Hi, I was confused about whether the depth output is inverted or not. So you mean the omnidata repo itself outputs a non-inverted depth map?

shwhjw commented 1 year ago

@elenacliu It's been a while, but if I remember correctly I was using another repo (sdfstudio) that is based on this omnidata repo, and the two produced results that looked like inverses of each other.

That said, I also asked this question on the sdfstudio repo, and apparently the colours of the images don't matter, as an accompanying .npy file is what is actually used rather than the image itself. I still haven't gotten any of my own data working, but admittedly I haven't looked at it for a few months.
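
(Since the .npy file is what downstream code actually consumes, inspecting its raw values sidesteps the colour ambiguity entirely. A minimal sketch; the filename is hypothetical.)

```python
import numpy as np
import matplotlib.pyplot as plt

# The PNG is only a visualisation; load the array the pipeline saved.
depth = np.load("frame_0000_depth.npy")  # hypothetical output file
print(depth.shape, depth.dtype, depth.min(), depth.max())

# Render with an explicit colormap so near/far colours are unambiguous.
plt.imshow(depth, cmap="viridis")
plt.colorbar(label="depth (arbitrary units)")
plt.savefig("depth_check.png")
```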