google / mannequinchallenge

Inference code and trained models for "Learning the Depths of Moving People by Watching Frozen People."
https://google.github.io/mannequinchallenge
Apache License 2.0

Output depth data format #8

Open arvkr opened 4 years ago

arvkr commented 4 years ago

Hi, thanks for sharing the inference code. When the model infers depth from a single image, is the estimated depth in meters? What format is it exactly? Since the ground truth is not available, I am not able to figure this out directly. Thank you.

fcole commented 4 years ago

The model estimates depth up to an unknown scale parameter, so the units themselves are not that meaningful. The error metrics we use for evaluation measure the accuracy of the depth map up to scale. This is a consequence of the training data (multi-view stereo) also having a scale ambiguity.
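For reference, here is a minimal sketch (not this repo's evaluation code) of what "accuracy up to scale" can mean in practice: a scale-invariant RMSE in log space, where the least-squares optimal scale between prediction and ground truth is removed before computing the error.

```python
import numpy as np

def si_rmse(pred, gt, mask=None):
    """Scale-invariant RMSE in log space.

    Subtracting the mean log difference is equivalent to applying the
    least-squares optimal global scale to the prediction, so the metric
    is unaffected by the unknown scale factor.
    """
    if mask is None:
        mask = (gt > 0) & (pred > 0)
    log_diff = np.log(pred[mask]) - np.log(gt[mask])
    return np.sqrt(np.mean((log_diff - log_diff.mean()) ** 2))
```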

astro-fits commented 4 years ago

Hi fcole,
Do you mean that a depth map predicted by the pre-trained model is scaled by an unknown factor relative to the ground-truth depth?

jasjuang commented 4 years ago

Hi, is the depth image predicted by the network a 32-bit continuous floating-point image, or is it just an 8-bit image?

fcole commented 4 years ago

Yes, the output is a floating-point value. Each output map is scaled by an unknown factor relative to the ground truth (i.e., it's not in units of meters or anything like that).
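To make that concrete, here is a small sketch (the file name and saved format are assumptions, not necessarily what this repo writes out) showing that the prediction is a float32 map you can rescale yourself, either for visualization or to align with a known reference depth:

```python
import numpy as np

# Hypothetical path; adjust to however you save the network's output.
pred = np.load("pred_depth.npy").astype(np.float32)  # float32, arbitrary scale

# For visualization: normalize to [0, 1] before converting to 8-bit.
vis = (pred - pred.min()) / (pred.max() - pred.min() + 1e-8)

# If the true depth of one pixel (row r, col c) is known in meters,
# a single global scale brings the whole map into metric units.
r, c, known_depth_m = 100, 200, 2.5  # example values, not from the repo
metric_depth = pred * (known_depth_m / pred[r, c])
```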

astro-fits commented 4 years ago

Thanks for your reply. When training a model, I found that this scaling factor is correlated with how the ground-truth depth is normalized (e.g., to the range 1 to 3 or 1 to 10 meters). The factor also increases as training progresses over more epochs.
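One simple way to track that factor during training (a sketch assuming paired prediction and ground-truth maps are available; median scaling is one common convention, not necessarily what this repo uses):

```python
import numpy as np

def per_image_scale(pred, gt):
    """Median ratio of ground truth to prediction over valid pixels.

    Logging this value per epoch shows how the unknown scale factor
    drifts as training progresses.
    """
    valid = (gt > 0) & (pred > 0)
    return np.median(gt[valid] / pred[valid])
```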