Jemmagu opened this issue 2 years ago
Hi @Kai-46, I am curious about the initial roughness. Why did you choose a large initial roughness rather than a small one? Thanks!
Hi, can you post sample images here? Is your object glossy?
Hi, yes, the object is glossy. You mentioned scene normalization in the NeRF++ data processing, so do I need to do camera normalization in PhySG? When rendering with Blender, I put the object inside the unit sphere but did not normalize the cameras. The rendering results (shape, material) look OK except for the glossy part, which comes out totally diffuse. So I'm wondering what the problem is: maybe rendering with Blender needs extra attention, or the initial roughness matters, or something like that...
Hi,
It's very hard for me to judge what could be problematic in your case without any visuals.
Can you try the following things to check if your camera poses are correct, and post the results?
- inspect the camera epipolar geometry like the NeRF++ codebase did
- visualize the cameras and geometry like the NeRF++ codebase did. Note that this codebase requires slightly different normalization than NeRF++: NeRF++ puts all the cameras inside the unit sphere, while this one only puts the geometry inside the unit sphere (not necessarily the cameras); see the sketch below
- use the run_colmap_posed.py script in NeRF++ to reconstruct the geometry using conventional COLMAP MVS
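A minimal sketch of that normalization, assuming each cam_dict entry stores a flattened 4x4 world-to-camera matrix under a "W2C" key as in the released sample scenes (the helper name and signature are mine, not from the repo):

```python
import json
import numpy as np

def normalize_cam_dict(in_path, out_path, object_center, object_radius):
    """Sketch: rescale poses so the *geometry* fits in the unit sphere.

    Assumes each entry stores a flattened 4x4 world-to-camera matrix
    under 'W2C' (layout taken from the sample scenes; verify on yours).
    """
    with open(in_path) as f:
        cam_dict = json.load(f)

    center = np.asarray(object_center, dtype=np.float64)
    for cam in cam_dict.values():
        W2C = np.asarray(cam['W2C'], dtype=np.float64).reshape(4, 4)
        C2W = np.linalg.inv(W2C)
        # The world transform x' = (x - center) / radius maps the object
        # into the unit sphere; camera centers move the same way, and
        # rotations are unchanged.
        C2W[:3, 3] = (C2W[:3, 3] - center) / object_radius
        cam['W2C'] = np.linalg.inv(C2W).flatten().tolist()

    with open(out_path, 'w') as f:
        json.dump(cam_dict, f, indent=2)
```

If I remember correctly, NeRF++ ships a similar normalization script, but it targets the cameras-inside-the-unit-sphere convention, so don't reuse it here unmodified.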
When rendering with Mitsuba, do you do any post-processing on the camera poses, or do you just take the rendered w2c matrix and write it out in the JSON format?
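For anyone exporting their own renders, a minimal sketch of writing such a cam_dict, with the per-image keys K / W2C / img_size taken from the released kitty scene (the helper itself is hypothetical):

```python
import json
import numpy as np

def write_cam_dict(out_path, names, intrinsics, w2c_mats, width, height):
    """Sketch: dump per-image cameras in the layout of the sample scenes.

    `intrinsics` are 3x3 pinhole K matrices; `w2c_mats` are 4x4
    world-to-camera matrices in OpenCV convention (+Z forward).
    Key names are assumed from the released data; verify against kitty.
    """
    cam_dict = {}
    for name, K, W2C in zip(names, intrinsics, w2c_mats):
        K4 = np.eye(4)
        K4[:3, :3] = K
        cam_dict[name] = {
            'K': K4.flatten().tolist(),  # 4x4, row-major
            'W2C': np.asarray(W2C, dtype=np.float64).flatten().tolist(),
            'img_size': [width, height],
        }
    with open(out_path, 'w') as f:
        json.dump(cam_dict, f, indent=2)
```

One common gotcha when exporting from Blender: its cameras look down -Z with +Y up, whereas OpenCV-style w2c matrices assume +Z forward, so you typically need to right-multiply the camera-to-world rotation by diag(1, -1, -1) before inverting to w2c.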
I'm running into the same problem. Any ideas?
Hi,
It's very hard for me to judge what could be problematic in your case without any visuals.
Can you try the following things to check if your camera poses are correct, and post the results?
- inspect the camera epipolar geometry like the NeRF++ codebase did (see the sketch after this list)
- visualize the camera and geometry like the NeRF++ codebase did. Note that this codebase requires slightly different normalization than NeRF++. Basically, NeRF++ puts all the cameras inside the unit sphere, while this one only puts the geometry inside the unit sphere (not necessarily the cameras)
- use the run_colmap_posed.py script in NeRF++ to reconstruct the geometry using conventional COLMAP MVS
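For the first check, a rough sketch of drawing epipolar lines from known poses (these helpers are mine, not the NeRF++ scripts; they assume 3x3 intrinsics and OpenCV-style 4x4 world-to-camera matrices):

```python
import numpy as np
import cv2

def skew(t):
    """Cross-product matrix: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_poses(K1, W2C1, K2, W2C2):
    """Fundamental matrix between two posed pinhole cameras."""
    R1, t1 = W2C1[:3, :3], W2C1[:3, 3]
    R2, t2 = W2C2[:3, :3], W2C2[:3, 3]
    R = R2 @ R1.T               # relative rotation, cam1 -> cam2
    t = t2 - R @ t1             # relative translation
    E = skew(t) @ R             # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def draw_epipolar_line(img2, F, uv1, color=(0, 255, 0)):
    """Draw on image 2 the epipolar line of pixel uv1 from image 1."""
    a, b, c = F @ np.array([uv1[0], uv1[1], 1.0])
    w = img2.shape[1]
    # Intersect a*x + b*y + c = 0 with the left and right image borders
    # (assumes the line is not vertical, i.e. b != 0).
    p0 = (0, int(round(-c / b)))
    p1 = (w - 1, int(round(-(c + a * (w - 1)) / b)))
    return cv2.line(img2.copy(), p0, p1, color, 2)
```

Pick a few distinctive pixels in one image; if the matching feature in the other image does not lie on the drawn line, the poses (or intrinsics) are wrong.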
Hi @Kai-46, there is a parameter named "object_bounding_sphere" in the .conf. Is it necessary to make sure the object is inside the unit sphere, or is it enough to modify this parameter? I tried some complex scenes, and it seemed difficult to change their size, so I changed this parameter instead, but the results look strange.
Train using IDR:
Train using PhySG:
Hi, @Woolseyyy
When trying a complex scene, I got rendering results similar to yours. Have you solved this problem? Thanks a lot.
Hi @Kai-46, thank you for your excellent work! However, I ran into a problem when training on my own images. I rendered the images with Blender and kept the training set in the same format as your kitty scene. Training seems fine, except that it never produces glossy results at any point during training. Do you have any idea why this happens? Is there some difference between Mitsuba and Blender, or do the rendered images need post-processing?
Really hoping for your reply; thanks in advance!