Closed hit421 closed 2 years ago
Hi, these are two totally different models. The "omnidata normal" network is trained to estimate normals from RGB. In the second output the network fails because the input image is not an RGB image but a scaled + shifted depth image that's been remapped to RGB space.
The "depth -> normal" network is just a little wrapper function we wrote to analytically estimate normals from the output of the depth network. It doesn't require any training, it's a classical approach :). It works reasonably well because (1) it's a fact about depth and (2) midas is trained with a gradient-matching loss that does something similar.
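Since the repo code for this wrapper is apparently hard to find (see the comments below), here is a minimal sketch of the classical approach described above. This is an assumption about what such a wrapper might look like, not the authors' actual function: it takes image-space depth gradients and uses (-dz/dx, -dz/dy, 1) as the unnormalized normal.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Analytically estimate per-pixel surface normals from a depth map.

    Training-free, classical approach (a sketch of what the "depth -> normal"
    wrapper could be, not the official implementation): the unnormalized
    normal at each pixel is (-dz/dx, -dz/dy, 1), then normalized to unit length.
    """
    depth = depth.astype(np.float64)
    # np.gradient returns [d/d(rows), d/d(cols)] = [dz/dy, dz/dx]
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    # Normalize each normal vector to unit length
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

# A perfectly flat depth map yields normals pointing straight at the camera:
flat_normals = normals_from_depth(np.full((4, 4), 2.0))  # all (0, 0, 1)
```

Note that the output of MiDaS-style networks is scale- and shift-ambiguous, so normals computed this way are correct only up to that unknown scaling; in practice a colormapped visualization of them can still look reasonable.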
Hi, thanks for the answer. Is the wrapper function you mentioned included in the provided code? I'm having trouble finding it.
@alexsax I have the same problem as @TeCai. Can you provide the "depth -> normal" wrapper function? Thanks a lot!
Hello! I'm wondering whether the "Surface Normal Estimation" model is the same as the "Surface Normals Extracted from Predicted Depth" model. If they are not the same, could you provide the model for "Surface Normals Extracted from Predicted Depth"? If they are the same, I get the following when I run it: the results highlighted in red are inconsistent. I would appreciate any help.