While preparing the HHA images from ScanNet, the depth images should be in meters. As mentioned in issue #7, the depth images are divided by 7000. However, the official dataset states that the depth shift is typically 1000.0 (see https://github.com/ScanNet/ScanNet/tree/master/SensReader). I tried both, but the resulting HHA images look weird (1000.0 looks better). Note that I use the Python repo https://github.com/charlesCXK/Depth2HHA-python to generate the HHA images, and I have refined the depth images following the authors' instructions (that step works fine).
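For reference, this is roughly how I convert the raw depth before feeding it to Depth2HHA-python (the file name is just an example from my local layout, and the 1000.0 shift is the value from the SensReader docs):

```python
import cv2
import numpy as np

# ScanNet depth frames are 16-bit PNGs storing depth in millimeters (invalid pixels are 0).
# Dividing by the depth shift of 1000.0 (per the official SensReader) gives depth in meters,
# which is the unit I assume Depth2HHA-python expects.
depth_raw = cv2.imread('frame-000000.depth.png', cv2.IMREAD_UNCHANGED).astype(np.float32)
depth_m = depth_raw / 1000.0  # invalid (zero) pixels stay zero
```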
I have two questions:
1. How did the authors generate the HHA images? Was the ScanNet depth converted to meters first?
2. Which camera matrix did the authors use for ScanNet: the per-scene "intrinsic_depth.txt" or "intrinsic_color.txt"? (A sketch of what I currently load is below.)
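This is how I currently load the intrinsics. I'm assuming the file exported by SensReader is a 4x4 matrix and that its top-left 3x3 block is the camera matrix to pass to Depth2HHA-python; the path is just my local layout, so please correct me if the color intrinsics should be used instead:

```python
import numpy as np

# Assumption: use the depth intrinsics (not color), since HHA is computed from the depth frames.
K = np.loadtxt('scene0000_00/intrinsic/intrinsic_depth.txt')[:3, :3]
# K should look like [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
print(K)
```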
I would greatly appreciate it if the authors could clarify this.
Thanks for your wonderful work!
Best