Closed: AlbertoRemus closed this issue 2 years ago
Hi,
I am not sure what you mean by up-to-scale. In the preprocessing step for computing the T-SDF, we normalize the meshes (zero-centered and inside a unit cube) following the DISN repo, in this function: https://github.com/Xharlie/DISN/blob/master/preprocessing/create_point_sdf_grid.py#L169-L198.
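For reference, the zero-center + unit-cube normalization described above can be sketched roughly as follows. Note that `normalize_mesh` is a hypothetical helper written for illustration; the actual DISN code linked above may differ in details (e.g. padding or the exact scale convention):

```python
import numpy as np

def normalize_mesh(vertices):
    """Zero-center a mesh and scale it to fit inside a unit cube.

    Illustrative sketch of the preprocessing described in the thread;
    not the exact DISN implementation.
    """
    vertices = np.asarray(vertices, dtype=np.float64)
    # Center at the midpoint of the axis-aligned bounding box.
    bb_min = vertices.min(axis=0)
    bb_max = vertices.max(axis=0)
    center = (bb_min + bb_max) / 2.0
    centered = vertices - center
    # Scale so the longest bounding-box edge has length 1.
    scale = (bb_max - bb_min).max()
    return centered / scale

# Example: a 2 x 4 x 1 box is rescaled so its longest edge is 1.
verts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 1.0]])
out = normalize_mesh(verts)
```

Because the original centroid and scale are discarded in this step, a model trained only on normalized meshes has no way to recover the metric size of the object, which is why the prediction is at the normalized scale.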
Hi @yccyenchicheng, thanks for your reply. What I was asking is: given an RGB image, can the method reconstruct the 3D model at the correct scale? To be more clear, is the scale of the estimated 3D model close enough to the ground-truth one?
Hi @AlbertoRemus, thank you for the explanation!
I think it can't, since we normalize all the meshes during training, so it predicts the 3D model at the normalized scale.
@yccyenchicheng thanks for your answer! I'll close the issue then, since I have the information I needed.
Hello, I find this work really interesting.
I would like to ask a question about the scaling factor of the reconstructed 3D mesh: in this framework, is the model reconstructed only up to scale, or is it able to recover the correct scaling factor?