Closed: LiuZhenyan-Wuzhong closed this issue 3 years ago
Hi, please see this thread for how the normal/depth information is encoded in the image: https://github.com/happylun/SketchModeling/issues/15#issuecomment-640281599
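In short, each normal component is typically mapped from [-1, 1] into an 8-bit RGB channel. Below is a minimal decoding sketch under that common assumption; the exact convention used by SketchModeling is in the linked comment, so treat the formula here as illustrative:

```python
import numpy as np
from PIL import Image

def decode_normal_map(path):
    """Decode an 8-bit RGB normal map into unit normals in [-1, 1].

    Assumes the common encoding n = rgb / 255 * 2 - 1; check the
    linked comment for the exact convention used by this project.
    """
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    n = rgb / 255.0 * 2.0 - 1.0           # map [0, 255] -> [-1, 1]
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)     # renormalize against quantization error
```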
[Image comparison: your vs. my ground-truth normal maps from nearly top, nearly bottom, and nearly right side views]
They appear to use different encoding conventions. Thank you very much.
Hello, I have used your TestingData to predict normal/depth maps and fused them into a mesh model; that works well. Now I am trying to train the network with my own input and ground truth (normal/depth maps). I used Blender to render the ground truth, but the result is different from your dataset (see the attached pictures), and the maps make no sense in the fusion step (also shown in the pictures). Could you tell me how you rendered your ground truth? Alternatively, could you share the convention of your normal maps (i.e., which direction each color channel encodes), so that I can remap my normal images to match yours? Thank you very much!
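For example, if the only difference were the sign convention of the green (Y) channel (as with OpenGL-style vs. DirectX-style normal maps; Blender defaults to the OpenGL style, but whether that is the actual mismatch here is an assumption I would verify against the ground-truth images first), the remapping would look like this:

```python
import numpy as np
from PIL import Image

def flip_green_channel(src_path, dst_path):
    """Convert a normal map between OpenGL- and DirectX-style conventions.

    The two conventions differ only in the sign of the Y (green) component.
    Whether this matches the Blender-vs-dataset mismatch is an assumption
    to check visually before retraining.
    """
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.uint8).copy()
    img[..., 1] = 255 - img[..., 1]       # negate Y: g' = 255 - g
    Image.fromarray(img).save(dst_path)
```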