Is there any normalization being performed on the depth/pointmap to make it scale-invariant?
Also, I've been following the MetricV2 work. Have you looked at including surface normals as supervision, via a pointmap -> depthmap -> surface normal conversion, so that the network can also produce surface normals?
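For reference, here is a minimal sketch of the kind of conversion I mean. This is just an illustration (the function name and the finite-difference scheme are my own, not from the dust3r codebase): normals are estimated by crossing the horizontal and vertical spatial gradients of the pointmap.

```python
import numpy as np

def normals_from_pointmap(pts: np.ndarray) -> np.ndarray:
    """Estimate per-pixel surface normals from an (H, W, 3) pointmap
    by crossing the horizontal and vertical spatial gradients.
    Illustrative sketch only, not dust3r code."""
    dx = pts[:, 1:, :] - pts[:, :-1, :]   # neighbor difference along width
    dy = pts[1:, :, :] - pts[:-1, :, :]   # neighbor difference along height
    # Crop both gradients to the common (H-1, W-1) grid, then cross.
    n = np.cross(dx[:-1, :, :], dy[:, :-1, :])
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8  # unit length
    return n

# Sanity check: a flat plane z = 1 should give normals along the z axis.
H, W = 8, 8
u, v = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
pts = np.stack([u, v, np.ones_like(u)], axis=-1)
n = normals_from_pointmap(pts)
```

A differentiable version of this (e.g. in PyTorch) could serve as the supervision signal I'm asking about.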
As I understand from https://github.com/naver/dust3r/issues/44#issuecomment-1999227727, you all are in the process of training a metric version of dust3r, so the current version's outputs are scale-invariant. Was the depth normalized before training? I see here https://github.com/naver/dust3r/blob/198c00910c2b7d009397590017e76bd58170dc02/dust3r/datasets/utils/cropping.py#L54 that the resolution is being changed, but I also notice here https://github.com/naver/dust3r/blob/198c00910c2b7d009397590017e76bd58170dc02/datasets_preprocess/preprocess_co3d.py#L190 that the metric depth map seems to be loaded.
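To make the question concrete, here is a sketch of the normalization scheme I have in mind, along the lines of the per-pointmap scale normalization described in the DUSt3R paper (dividing by the average distance of valid points to the origin). The function name and mask convention are my own, not taken from the repo:

```python
import numpy as np

def normalize_pointmap(pts: np.ndarray, valid: np.ndarray):
    """Divide a pointmap by the mean Euclidean distance of its valid
    points to the origin, producing a scale-invariant target.
    `pts` is (H, W, 3); `valid` is a boolean (H, W) mask.
    Illustrative sketch, not dust3r code."""
    dist = np.linalg.norm(pts[valid], axis=-1)   # per-point distances
    scale = dist.mean() + 1e-8                   # average valid distance
    return pts / scale, scale

# Scaling the scene by any factor leaves the normalized output unchanged.
rng = np.random.default_rng(0)
pts = rng.uniform(0.5, 2.0, size=(4, 4, 3))
valid = np.ones((4, 4), dtype=bool)
norm1, _ = normalize_pointmap(pts, valid)
norm2, _ = normalize_pointmap(10.0 * pts, valid)
```

My question is essentially whether something like this happens at training time (e.g. inside the loss), given that the preprocessing scripts appear to load metric depth.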