HenriMir opened this issue 6 years ago
Solved: just convert the images to 16 bits per channel and save them with a proper Python library (I used write_png).
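A minimal sketch of that kind of fix, saving a 16-bit-per-channel PNG with the pypng library (the thread only mentions "write_png", so the exact library, array shapes, and helper name here are assumptions):

```python
# Hypothetical sketch: write an HxWxC uint16 array as a 16-bit PNG with pypng.
import numpy as np
import png  # pypng

def save_png16(path, image):
    """Save an HxWxC uint16 array as a 16-bit-per-channel PNG."""
    h, w, c = image.shape
    writer = png.Writer(
        width=w,
        height=h,
        bitdepth=16,          # 16 bits per channel, as the C++ code expects
        greyscale=False,
        alpha=(c == 4),       # RGBA if a fourth channel is present
    )
    # pypng expects each row as a flat sequence of interleaved channel values
    rows = image.reshape(h, w * c).tolist()
    with open(path, "wb") as f:
        writer.write(f, rows)

# Example: random RGBA image, 16 bits per channel
img = (np.random.rand(64, 64, 4) * 65535).astype(np.uint16)
save_png16("example_16bit.png", img)
```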
For your reference, if you have your own depth/normal data, you can load it in Python and use TensorFlow's built-in method to do the PNG encoding (that's what I did in my code). Note that in my code I encode the normal data in the RGB channels and the depth data in the alpha channel. Also, for each channel I map the values from [-1, 1] to [0, 65535] and encode the image as a 16-bit PNG.
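A hedged sketch of what is described above: pack normals into RGB and depth into alpha, map [-1, 1] to [0, 65535], and encode a 16-bit PNG with TensorFlow's built-in encoder. The function name, array shapes, and file path are assumptions for illustration, not the repository's actual code:

```python
# Assumed sketch, not the repository's code: encode normals (RGB) + depth (alpha)
# into a single 16-bit RGBA PNG using TensorFlow's PNG encoder.
import numpy as np
import tensorflow as tf

def encode_normal_depth(normal, depth, path):
    """normal: HxWx3 float in [-1, 1]; depth: HxW float in [-1, 1]."""
    rgba = np.concatenate([normal, depth[..., None]], axis=-1)  # HxWx4
    # Map [-1, 1] -> [0, 65535] and quantize to uint16
    rgba16 = np.clip((rgba + 1.0) * 0.5 * 65535.0, 0, 65535).astype(np.uint16)
    # tf.io.encode_png emits a 16-bit PNG when given uint16 input
    png_bytes = tf.io.encode_png(rgba16)
    tf.io.write_file(path, png_bytes)

# Example usage with dummy data
h, w = 128, 128
normal = np.random.uniform(-1, 1, size=(h, w, 3)).astype(np.float32)
depth = np.random.uniform(-1, 1, size=(h, w)).astype(np.float32)
encode_normal_depth(normal, depth, "normal_depth_16bit.png")
```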
Hi, I am trying to use the mesh-reconstruction-from-depth/normal-maps part of your pipeline with my own depth and normal maps, but I keep getting an image format error in the C++ code (wrong number of bytes per pixel). How do I have to save the images so that they can be processed by your C++ code?
Thank you