DeepSceneSeg / SSMA

TensorFlow model for training SSMA for multimodal semantic segmentation
http://deepscene.cs.uni-freiburg.de
GNU General Public License v3.0

Question about ScanNet HHA image preparation. #13

Closed: RunpeiDong closed this issue 2 years ago

RunpeiDong commented 2 years ago

Thanks for your wonderful work!

While preparing the HHA images from ScanNet, the depth images should be in meters. As mentioned in issue #7, the depth images are divided by 7000. However, the official dataset states that the depth shift is typically 1000.0 (see https://github.com/ScanNet/ScanNet/tree/master/SensReader). I tried both, but the HHA images look weird (1000.0 looks better). Note that I use the Python repo https://github.com/charlesCXK/Depth2HHA-python to get the HHA images, and I have refined the depth images following the authors' instructions (this step works fine).

I have two questions:

  1. How do the authors get the HHA images? Is the ScanNet depth converted into meters?
  2. Which camera matrix do the authors use for ScanNet? Is it the individual "intrinsic_depth.txt" or "intrinsic_color.txt" in each ScanNet scene? (My current conversion sketch is below for reference.)
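
For reference, this is roughly how I convert the depth and load the intrinsics before running the HHA code (a minimal sketch; the file paths and the `getHHA(camera_matrix, depth_m, raw_depth_m)` entry point of Depth2HHA-python are my assumptions, not something confirmed by the authors):

```python
import cv2
import numpy as np

# ScanNet's SensReader documentation gives a depth shift of 1000.0,
# i.e. the raw 16-bit depth PNGs store millimetres.
DEPTH_SHIFT = 1000.0

def load_depth_in_meters(depth_path):
    # Read the 16-bit depth PNG unchanged and convert it to metres.
    raw = cv2.imread(depth_path, cv2.IMREAD_ANYDEPTH).astype(np.float32)
    return raw / DEPTH_SHIFT

def load_camera_matrix(intrinsic_path):
    # SensReader exports a 4x4 intrinsic matrix; the HHA code only
    # needs the upper-left 3x3 camera matrix.
    return np.loadtxt(intrinsic_path)[:3, :3]

# Hypothetical paths from my own export layout:
# depth_m = load_depth_in_meters('scene0000_00/depth/0.png')
# K = load_camera_matrix('scene0000_00/intrinsic/intrinsic_depth.txt')
# hha = getHHA(K, depth_m, depth_m)  # getHHA from Depth2HHA-python
```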

I would very much appreciate it if the authors could help resolve this.

Best

RunpeiDong commented 2 years ago

I have solved this problem (it was caused by other bugs, not the algorithm), but I would still like to hear your answer regarding the camera matrix choice.

Thanks very much.

RunpeiDong commented 2 years ago

Closing this issue due to no response.