EPFL-VILAB / omnidata

A Scalable Pipeline for Making Steerable Multi-Task Mid-Level Vision Datasets from 3D Scans [ICCV 2021]

How to clip predictions for evaluation #34

Closed: jeeyung closed this issue 1 year ago

jeeyung commented 1 year ago

Hello Again!

You mentioned that you clip at whatever the maximum non-missing depth is. I found that you clamp outputs to the range [0, 1] in the code when testing: https://github.com/EPFL-VILAB/omnidata/blob/318a75569934737e67902f903531324d1f48ae8f/paper_code/test_depth.py#L207 Is this what I should follow to get performance similar to yours? I got very low scores on several depth-estimation metrics :(

I thought clipping was used only for visualization. If we clip the prediction and ground truth to the range [0, 1], don't we ignore too much of the image, e.g. everything in the range (1, 128)? I also can't see any normalization applied for depth estimation, only for the RGB input.
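
For context, here is a minimal sketch of what I understand the evaluation clamp to be doing (the function and mask names are mine, not the repo's; I'm assuming prediction and ground truth are both in the normalized [0, 1] range):

```python
import torch

# Illustrative only: clamp predictions/targets to [0, 1] and keep only
# pixels where ground-truth depth is available before computing metrics.
def clamp_for_eval(pred: torch.Tensor, gt: torch.Tensor, valid_mask: torch.Tensor):
    pred = torch.clamp(pred, 0.0, 1.0)
    gt = torch.clamp(gt, 0.0, 1.0)
    return pred[valid_mask], gt[valid_mask]
```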

Originally posted by @jeeyung in https://github.com/EPFL-VILAB/omnidata/issues/32#issuecomment-1368104028

jeeyung commented 1 year ago

I had used the transform functions in omnidata_tools/torch.dataloader/transforms.py, which are different from the ones in paper_code/data/transforms.py. I'll close the issue since I found what was wrong with my implementation.
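
For anyone hitting the same thing, here is a rough sketch of the kind of preprocessing mismatch I mean (illustrative, not the repo's actual transforms; the 128 m cap is an assumption): if the dataloader rescales metric depth into [0, 1] but the evaluation expects raw meters (or vice versa), the metrics come out very low.

```python
import torch

MAX_DEPTH = 128.0  # assumed dataset depth cap, in meters

def depth_to_unit_range(depth_m: torch.Tensor) -> torch.Tensor:
    """Rescale metric depth into the [0, 1] range the network is trained on."""
    return torch.clamp(depth_m / MAX_DEPTH, 0.0, 1.0)

def unit_range_to_depth(depth_unit: torch.Tensor) -> torch.Tensor:
    """Invert the rescaling to get metric depth back."""
    return depth_unit * MAX_DEPTH
```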

ewrfcas commented 1 year ago

@jeeyung Hi, do you know how to get the real depth back from the normalized one? Is (1 - pred_depth) * 128 right?
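
To make the question concrete, here is a small sketch of the two candidate de-normalizations (both the 128 m cap and whether the output needs the 1 - x inversion are assumptions; the inversion is exactly what I'm unsure about):

```python
def denormalize_depth(pred_depth, max_depth=128.0, inverted=True):
    # If the model predicts "inverted" depth in [0, 1], flip it first;
    # otherwise just rescale. Neither variant is confirmed here.
    unit = (1.0 - pred_depth) if inverted else pred_depth
    return unit * max_depth
```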

Twilight89 commented 1 year ago

> @jeeyung Hi, do you know how to get the real depth back from the normalized one? Is (1 - pred_depth) * 128 right?

Hi, did you solve this problem? How do you get the real depth from the normalized prediction?

Besides, I found the metric code a little strange; it differs from the one I saw in the MiDaS repo.
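
For comparison, here is a minimal sketch of the standard monocular-depth metrics as I understand them (`pred` and `gt` are metric depths at valid pixels only; this is not the repo's code). MiDaS, as far as I know, additionally aligns scale and shift before computing these, which may explain part of the difference.

```python
import torch

def depth_metrics(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    # Absolute relative error and the delta < 1.25 accuracy threshold.
    abs_rel = torch.mean(torch.abs(pred - gt) / gt)
    ratio = torch.max(pred / gt, gt / pred)
    delta1 = torch.mean((ratio < 1.25).float())
    return {"abs_rel": abs_rel.item(), "delta1": delta1.item()}
```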