Hi, do you mean running the "Fusion" part with your own depth and normal maps? Here is my code that parses the depth and normal information from the image: https://github.com/happylun/SketchModeling/blob/master/Fusion/src/ReconstructMesh/MapsData.cpp#L177

The input format is a 16-bit image, either:

- a single-channel image encoding only depth information, or
- an RGBA 4-channel image encoding both normal and depth information.

For an RGBA image, the RGB channels encode the normal (the x, y, z coordinates are mapped from [-1, 1) to the [0, 32768) 16-bit value range) and the alpha channel encodes the depth (also mapped from [-1, 1) to [0, 32768)). For a single-channel image, only the depth is encoded and the code computes the normals from the depth map.

The output images generated by the "Network" part should already encode the depth and normal information in this format. If you are using your own depth maps, you can either convert your images to the format described above, or modify the code to read your input format.
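As a rough illustration of the mapping described above, here is a minimal Python sketch (not the repository's tooling, which is C++) for writing a depth map, and optionally a normal map, into a 16-bit PNG. It assumes your depth and normal values are already floats in [-1, 1); the function names are made up for this example, and the RGBA channel ordering expected by MapsData.cpp should be verified against the loader itself.

```python
# Hedged sketch: convert float depth/normal maps in [-1, 1) to the 16-bit
# format described above. Assumes OpenCV (pip install opencv-python).
import numpy as np
import cv2


def to_uint16(values):
    """Map values from [-1, 1) to the [0, 32768) 16-bit value range."""
    return np.clip((values + 1.0) * 16384.0, 0, 32767).astype(np.uint16)


def write_depth_only(depth, path):
    """Single-channel 16-bit PNG: depth only (normals are then computed by the code)."""
    cv2.imwrite(path, to_uint16(depth))


def write_depth_and_normals(depth, normals, path):
    """4-channel 16-bit PNG: RGB = normal x, y, z, alpha = depth.
    OpenCV interprets a 4-channel array as B, G, R, A, so the channels are
    stacked as z, y, x, depth to end up with R=x, G=y, B=z in the file."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    bgra = np.stack([to_uint16(z), to_uint16(y), to_uint16(x), to_uint16(depth)], axis=-1)
    cv2.imwrite(path, bgra)


# Example usage with dummy data (replace with your own maps):
# depth = np.random.uniform(-1, 1, (256, 256)).astype(np.float32)
# normals = np.random.uniform(-1, 1, (256, 256, 3)).astype(np.float32)
# write_depth_and_normals(depth, normals, "view_0.png")
```

If your own depth map uses a different value range (e.g. metric depth or an 8-bit encoding), you would first need to remap it into [-1, 1) before applying the conversion above.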
Hi @happylun, I have tested the model on my own depth maps, but it produced no result. I am using this image. I found that the depth maps in your dataset have 3 color channels.
Could you share the code for converting my depth image into your format?
Thanks in advance!!