omniobject3d / OmniObject3D

[ CVPR 2023 Award Candidate ] OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation

About SfM-wo-bg and SfM-w-bg #44

Closed LiuXD1011 closed 3 months ago

LiuXD1011 commented 4 months ago

Hello,

I would like to ask, before novel view synthesis, should I use COLMAP first and then remove the background from the images, or should I remove the background from the images first and then use COLMAP? Or are there other steps to achieve this effect? Thank you for your answer!

the original: “The SfM-wo-bg and SfM-w-bg settings use images sampled from iPhone videos and camera parameters generated by COLMAP. The difference between them is whether the background is included.”

omniobject3d commented 3 months ago

Hi, you can simply use the camera poses provided by our dataset, which apply to both the images with and without masks. If you want to create your own camera poses via SfM, I would recommend running COLMAP first and then removing the background, so that more information is left in the scene for feature matching during SfM.
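The recommended order above (COLMAP on the full images, then masking) can be sketched as a small post-processing step. This is a minimal NumPy sketch, not the dataset's official tooling; the function name `apply_mask` and the convention "nonzero mask = foreground" are assumptions for illustration.

```python
import numpy as np

def apply_mask(image, mask, bg_value=0):
    """Zero out background pixels after SfM has already been run.

    image: (H, W, 3) uint8 array; mask: (H, W) array, nonzero = foreground.
    Pixels where mask > 0 are kept; all others are set to bg_value.
    """
    fg = (mask > 0)[..., None]  # (H, W, 1), broadcasts over the 3 channels
    return np.where(fg, image, np.uint8(bg_value))

# Toy example: 2x2 image, top row marked as foreground in the mask.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
msk = np.array([[255, 255], [0, 0]], dtype=np.uint8)
out = apply_mask(img, msk)
```

In a real pipeline you would load each frame and its mask (e.g. with PIL or OpenCV), apply this per image, and keep the COLMAP poses unchanged, since masking does not move the camera.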

LiuXD1011 commented 3 months ago

Thank you for your response! I used the masks from the dataset you provided to remove the background from the images in the images and images_8 folders, and then used the corresponding poses for novel view synthesis. The library I am using is https://github.com/yenchenlin/nerf-pytorch, with all training parameters at their defaults. However, the images I render are all black, and I don't know why. Could you give me some tips and guidance? Thank you very much! Here is the file I have processed: aaaB

[Screenshots attached: QQ_1722335519393, QQ_1722335532328]

wutong16 commented 3 months ago

Hi, we need to first figure out whether this is caused by an incorrect camera coordinate system or the overfitting to background color.

Could you please try running the same code and camera poses on the original images, where the background is not removed? If that works well, then the previous issue is probably overfitting to the black background during training, since the background area is so large. Try using the mask to down-weight the MSE loss on the background, for example loss = loss_within_mask + \lambda * loss_outside_mask, where values of \lambda in the range 0.1–0.5 are reasonable starting points.
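The re-weighted loss above can be sketched as follows. This is an illustrative NumPy version, not code from nerf-pytorch; the function name `masked_mse` and the per-region mean reduction are assumptions, and in actual training you would compute the same thing on torch tensors over the sampled rays.

```python
import numpy as np

def masked_mse(pred, target, mask, lam=0.2):
    """MSE re-weighted by a foreground mask.

    pred, target: (H, W) float arrays; mask: (H, W), nonzero = foreground.
    Foreground error gets full weight; background error is scaled by lam
    (try lam in 0.1-0.5) so the model cannot minimize loss by predicting
    solid black everywhere.
    """
    sq = (pred - target) ** 2  # per-pixel squared error
    inside = sq[mask > 0].mean() if (mask > 0).any() else 0.0
    outside = sq[mask == 0].mean() if (mask == 0).any() else 0.0
    return inside + lam * outside
```

The design point is that the background still contributes a small gradient (so it converges to black where it should be black), but the object region dominates the objective.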

LiuXD1011 commented 3 months ago

The cause of this problem is indeed overfitting. Thank you for your suggestion, and the issue has now been resolved. Thank you very much!