niessner / Matterport

Matterport3D is a pretty awesome dataset for RGB-D machine learning tasks :)
https://niessner.github.io/Matterport/
MIT License

Camera Pose for Skybox #13

Closed. Jiankai-Sun closed this issue 5 years ago.

Jiankai-Sun commented 6 years ago

First of all, thank you for your great work.

Because we want to calculate the 3D bounding box coordinates in the camera coordinate system for every panorama, we need the camera pose files for the skybox (we follow the instructions in https://github.com/yindaz/PanoBasic/blob/master/demo_matterport.m to reconstruct the panorama from the skybox images). However, I cannot find the camera pose files for the skybox.

Where are the skybox camera pose files in the dataset? If Matterport3D doesn't provide them, is there any way to derive them from the existing data?

Furthermore, given accurate (vx, vy) values for the upward- and downward-looking views, we also tried to stitch the 18 views (undistorted_color_images) together into a more complete panorama. However, we cannot find the correct vx/vy in the camera_pose folder, and the resulting panorama is wrong. Could you please give some suggestions on how to use the provided camera poses for panorama stitching?

Thank you!

DBobkov commented 5 years ago

I did panorama stitching from the skybox images some time ago. For the stitching, I used the PanoBasic toolbox. I had to adjust vx and vy in the script to account for a certain offset of the resulting panorama with respect to the individual cameras. I can share sample code with you if you are interested. As far as I remember, for the position of the resulting panorama I used the average position of all 18 cameras as a first approximation.
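For reference, a minimal Python sketch of that averaging step. The pose-file naming and layout below are assumptions (one 4x4 camera-to-world matrix per view), not the dataset's documented format:

```python
import glob
import numpy as np

def average_camera_position(pose_dir, pano_id):
    """Average the positions of all per-view camera poses of one panorama.

    Assumes each pose file stores a 4x4 camera-to-world matrix whose last
    column holds the camera position; the file pattern is hypothetical.
    """
    positions = []
    for path in sorted(glob.glob(f"{pose_dir}/{pano_id}_pose_*.txt")):
        cam2world = np.loadtxt(path).reshape(4, 4)
        positions.append(cam2world[:3, 3])
    # first-order approximation of the panorama center, as described above
    return np.mean(positions, axis=0)
```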

nbfuhao commented 5 years ago

@DBobkov Did you manage to remove the exposure differences after stitching the 18 views using PanoBasic?

DBobkov commented 5 years ago

@Jiankai-Sun I didn't do anything specific to compensate for that, but the obtained results already looked quite decent. See the attached panoramas. [attached image: matterportpanoramas]

Jiankai-Sun commented 5 years ago

@DBobkov Sure. Thank you for your reply!

HalleyJiang commented 4 years ago

@DBobkov Hello DBobkov, your results look great. I'm having difficulty stitching the 18 views. Can you explain how to convert the camera poses to vx and vy, or share your code?

Ranqing commented 3 years ago

> @DBobkov Hello DBobkov, your results look great. I'm having difficulty stitching the 18 views. Can you explain how to convert the camera poses to vx and vy, or share your code?

Hi @HalleyJiang, have you achieved a seamless panorama by stitching the 18 views?

dennisritter commented 3 years ago

I am also wondering how to convert the extrinsics to the vx/vy values used in PanoBasic in order to stitch the 18-view panoramas from the color and depth images. Has anyone done this and can explain it or share code?
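For anyone in the same situation, a hedged sketch of one way to derive such angles from a camera-to-world rotation matrix. Both conventions used here (camera looks along +z in camera coordinates, world is z-up) are assumptions and may need adjusting to PanoBasic's actual frame:

```python
import numpy as np

def rotation_to_vx_vy(rot):
    """Derive azimuth (vx) and elevation (vy), in radians, from a 3x3
    camera-to-world rotation. The axis conventions here are assumptions."""
    forward = rot @ np.array([0.0, 0.0, 1.0])  # view direction in world cs
    vx = np.arctan2(forward[1], forward[0])    # azimuth around the up axis
    vy = np.arctan2(forward[2], np.hypot(forward[0], forward[1]))  # elevation
    return vx, vy
```

A constant azimuth offset may still be needed to align zero longitude of the output panorama with PanoBasic's expectation.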

nivesh48 commented 2 years ago

Hello @dennisritter, I'm trying to figure out the same thing. Can you tell me more about this if you have already found it out?

dennisritter commented 2 years ago

Hi @nivesh, I found the paper UniFuse: Unidirectional Fusion for 360° Panorama Depth Estimation by Jiang et al. They provide MIT-licensed MATLAB code that performs the stitching as a Matterport3D preprocessing step: https://github.com/alibaba/UniFuse-Unidirectional-Fusion

nivesh48 commented 2 years ago

Thanks @dennisritter. I'll let you know how it works.

manurare commented 2 years ago

Hi @nivesh48 @dennisritter. In case you didn't know, we have calculated skybox camera poses. We obtained 9684 poses out of the initial 10800.

They can be downloaded either with the Matterport3D download script, `python download_mp.py -o [output directory] --task_data mp360`, or here.

Aitensa commented 2 years ago

Hi @manurare, I have learned about your dataset, but I still wonder whether the RGB panoramas come from the 18 partial images.

manurare commented 2 years ago

Hi @Aitensa. No, they come from the 6 skybox images.
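For context, this is roughly what that skybox-to-equirectangular resampling looks like. The sketch below assumes a canonical cube-map convention; Matterport's actual face order and per-face orientations may require flips:

```python
import numpy as np

def skybox_to_equirect(faces, height=512):
    """Resample six square skybox faces into an equirectangular panorama.

    `faces` maps '+x','-x','+y','-y','+z','-z' to HxWx3 uint8 arrays.
    The face orientations assumed here are a canonical convention and
    may differ from Matterport's skybox layout.
    """
    width = 2 * height
    v, u = np.mgrid[0:height, 0:width]
    lon = (u / width) * 2 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2 - (v / height) * np.pi       # latitude in (pi/2, -pi/2]
    # unit view direction per output pixel (y-up convention)
    d = np.stack([np.cos(lat) * np.sin(lon),
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)], axis=-1)
    axis = np.argmax(np.abs(d), axis=-1)         # dominant axis selects the face
    out = np.zeros((height, width, 3), dtype=np.uint8)
    for ax, names in enumerate([('+x', '-x'), ('+y', '-y'), ('+z', '-z')]):
        for sign, name in zip((1, -1), names):
            mask = (axis == ax) & (sign * d[..., ax] > 0)
            t = d[mask] / np.abs(d[mask][:, ax:ax + 1])  # project onto face plane
            a, b = [i for i in range(3) if i != ax]      # the two in-plane axes
            fh, fw = faces[name].shape[:2]
            col = np.clip(((t[:, a] + 1) / 2 * fw).astype(int), 0, fw - 1)
            row = np.clip(((t[:, b] + 1) / 2 * fh).astype(int), 0, fh - 1)
            out[mask] = faces[name][row, col]
    return out
```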

1742483876 commented 2 years ago

@manurare Thank you for your great work. However, when I back-projected the color and depth information from the Matterport3D dataset, I got mismatched point clouds within the same room. The projection method follows your open-source code and the dataset's readme file. Could you please help me look into this problem?

```python
import numpy as np

# cam2world: map the point x = [0, 0, 0] from the camera cs to the world cs,
# where rot and C are the camera-to-world rotation and camera position
x = np.array([0, 0, 0])
x_prime = rot @ x + C
```

[attached: two screenshots (2022-08-23) showing the mismatched point clouds]
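In case it helps with debugging, a fuller sketch of the back-projection under the same x_prime = rot @ x + C convention, using a standard pinhole model. The intrinsics handling and axis conventions are assumptions; if the poses follow a different camera convention (see the reply below), this is exactly where mismatches appear:

```python
import numpy as np

def backproject_depth(depth, K, rot, C):
    """Lift a metric z-depth map to world-space points (pinhole model).

    Assumes K is the 3x3 intrinsics and (rot, C) the camera-to-world
    rotation and position; the camera is assumed to look along +z.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T        # camera-space rays at z = 1
    pts_cam = rays * depth.reshape(-1, 1)  # scale each ray by its depth
    return pts_cam @ rot.T + C             # camera cs -> world cs
```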

manurare commented 1 year ago

Hi,

Sorry for the late reply, I forgot to answer. The poses we provide are in the Blender coordinate system. Are you sure the mesh is imported correctly in Meshlab?
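A frequent source of exactly this kind of mismatch: Blender's camera convention looks along -z with +y up, whereas OpenCV-style cameras look along +z with +y down. A minimal sketch of the usual basis change, assuming (not confirming) that this is the discrepancy in the poses above:

```python
import numpy as np

# Flipping the camera's y and z axes converts a Blender-convention
# camera-to-world rotation (looks along -z, +y up) into an OpenCV-style
# one (looks along +z, +y down); the camera position C is unchanged.
BLENDER_TO_CV = np.diag([1.0, -1.0, -1.0])

def blender_rot_to_cv(rot_blender):
    return rot_blender @ BLENDER_TO_CV
```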