I noticed that your synthetic dataset contains depth images, and I would like to use the depth information for my research. However, the depth values are normalized to [0, 255], so I cannot recover metric XYZ coordinates from your RGB-D data directly. Could you explain how to back-project the RGB-D images into 3D point clouds? I already have the fisheye lens parameters you mentioned in the previous issue.
Thank you for sharing your great work.
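To make the question concrete, here is a sketch of what I am currently assuming. Everything in it is a guess on my part: a linear [0, 255] normalization with hypothetical near/far bounds `d_min`/`d_max`, an equidistant fisheye model (r = f * theta), and depth stored as ray length rather than z-depth. Please correct whichever assumptions are wrong.

```python
import numpy as np

def depth_to_pointcloud(depth_norm, fx, cx, cy, d_min, d_max):
    """Recover a 3D point cloud from a [0, 255]-normalized depth image.

    Assumptions (hypothetical, not confirmed by the dataset authors):
      * depth was linearly normalized: d = d_min + (d_norm / 255) * (d_max - d_min)
      * equidistant fisheye projection: r_pixels = fx * theta
      * the stored depth is the ray length (Euclidean distance), not z-depth
    """
    h, w = depth_norm.shape
    # Undo the assumed linear normalization back to metric depth.
    d = d_min + (depth_norm.astype(np.float64) / 255.0) * (d_max - d_min)

    # Pixel coordinates relative to the principal point.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x, y = u - cx, v - cy
    r = np.sqrt(x**2 + y**2)
    theta = r / fx                       # equidistant model: r = f * theta
    r_safe = np.where(r == 0, 1.0, r)    # avoid division by zero at the center

    # Unit ray direction per pixel, then scale by the metric depth.
    dir_x = np.sin(theta) * x / r_safe
    dir_y = np.sin(theta) * y / r_safe
    dir_z = np.cos(theta)
    points = d[..., None] * np.stack([dir_x, dir_y, dir_z], axis=-1)
    return points.reshape(-1, 3)
```

If the normalization is per-image (each depth map scaled by its own min/max) rather than global, then of course no fixed `d_min`/`d_max` exists and the original metric depth cannot be recovered without extra metadata, which is really the heart of my question.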