Kai-46 / PhySG

Code for PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Relighting and Material Editing

Why can't I get glossy results when training on Blender images? #5

Open · Jemmagu opened 2 years ago

Jemmagu commented 2 years ago

Hi @Kai-46, thank you for your excellent work! I ran into a problem when trying to train on my own images. I rendered them with the Blender renderer and kept the training set in the same format as your kitty dataset. The training process seems fine, except that it never produces glossy results at any point during training. Do you have any idea why this happens? Is there some difference between Mitsuba and Blender, or do the rendered images need post-processing?

Really hoping for your reply; thanks in advance!

Jemmagu commented 2 years ago

Hi @Kai-46, I'm curious about the initial roughness: why did you choose a large initial roughness rather than a small one? Thanks!

Kai-46 commented 2 years ago

Hi, can you post sample images here? Is your object glossy?

Jemmagu commented 2 years ago

Hi, yes, the object is glossy. You mentioned scene normalization in the NeRF++ data processing, so do I need to do camera normalization for PhySG as well? When rendering with Blender I placed the object inside the unit sphere, but I did not normalize the cameras. The rendered results (shape, material) look fine except for the glossy part, which comes out totally diffuse. So I'm wondering what the problem is: maybe rendering with Blender needs extra care, or the initial roughness matters, or something along those lines...
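
In case it helps others debugging the same setup: below is a minimal sketch (not code from the repo; `normalize_scene`, the bounding-box arguments, and the margin are all assumptions) of the normalization Kai-46 describes below, i.e. applying a global similarity transform so the geometry ends up inside the unit sphere while the cameras may stay outside.

```python
import numpy as np

def normalize_scene(w2c_list, bbox_min, bbox_max, margin=0.95):
    """Rescale the world so the object's bounding box fits inside the unit
    sphere, and update the world-to-camera matrices accordingly.
    w2c_list: 4x4 world-to-camera matrices; bbox_min/bbox_max: the object's
    axis-aligned bounding box in world coordinates (e.g. read off in Blender)."""
    center = 0.5 * (np.asarray(bbox_min) + np.asarray(bbox_max))
    radius = 0.5 * np.linalg.norm(np.asarray(bbox_max) - np.asarray(bbox_min))
    scale = margin / radius  # world points map to scale * (p - center)

    out = []
    for w2c in w2c_list:
        R, t = w2c[:3, :3], w2c[:3, 3]
        new = np.eye(4)
        new[:3, :3] = R                        # rotations are unchanged by a
        new[:3, 3] = scale * (t + R @ center)  # uniform similarity transform
        out.append(new)
    return out
```

One more thing worth ruling out with Blender exports: Blender cameras look down their local -Z axis with +Y up, whereas OpenCV-style world-to-camera matrices (the convention used by COLMAP and most NeRF-family codebases) have +Z forward and +Y down, so an exported camera matrix typically needs its Y and Z camera axes negated before being used as a W2C here.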

Kai-46 commented 2 years ago

Hi,

It's very hard for me to judge what could be problematic in your case without any visuals.

Can you try the following things to check if your camera poses are correct, and post the results?

  1. Inspect the camera epipolar geometry, as the NeRF++ codebase does (see the sketch below).
  2. Visualize the cameras and geometry, as the NeRF++ codebase does. Note that this codebase requires slightly different normalization than NeRF++: NeRF++ puts all the cameras inside the unit sphere, while this one only puts the geometry inside the unit sphere (not necessarily the cameras).
  3. Use the run_colmap_posed.py script in NeRF++ to reconstruct the geometry with conventional COLMAP MVS.
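
As a concrete starting point for check 1, here is a minimal sketch in plain NumPy (not the NeRF++ script's API; all names here are placeholders): it builds the fundamental matrix from two posed cameras, then measures how far a matched pixel falls from its epipolar line. Large residuals on a few hand-picked correspondences indicate bad poses.

```python
import numpy as np

def fundamental_from_poses(K1, W2C1, K2, W2C2):
    """Fundamental matrix between two calibrated views.
    K1, K2: 3x3 intrinsics; W2C1, W2C2: 4x4 world-to-camera extrinsics."""
    R1, t1 = W2C1[:3, :3], W2C1[:3, 3]
    R2, t2 = W2C2[:3, :3], W2C2[:3, 3]
    # Relative pose taking camera-1 coordinates to camera-2 coordinates.
    R = R2 @ R1.T
    t = t2 - R @ t1
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # cross-product matrix [t]_x
    E = tx @ R                            # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def epipolar_residual(F, x1, x2):
    """Pixel distance of x2 (in image 2) from the epipolar line of x1."""
    l2 = F @ np.array([x1[0], x1[1], 1.0])
    return abs(l2 @ np.array([x2[0], x2[1], 1.0])) / np.hypot(l2[0], l2[1])
```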

Jemmagu commented 2 years ago

When rendering with Mitsuba, do you apply any post-processing? Or do you just take the rendered w2c matrix and write it out in the JSON format?
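
For what it's worth, writing the poses out is roughly the sketch below. The per-image layout (a flattened 4x4 "K", a flattened 4x4 "W2C", and "img_size") is inferred from the released kitty data, so verify it against the repo before relying on it; `write_cam_dict` and its arguments are hypothetical.

```python
import json
import numpy as np

def write_cam_dict(w2c_by_image, K, img_size, out_path="cam_dict_norm.json"):
    """Dump poses in a PhySG-style cam_dict (layout inferred from the kitty
    data -- verify against the repo). w2c_by_image maps image name to a 4x4
    world-to-camera matrix; K is a 4x4 intrinsics matrix (3x3 padded);
    img_size is (width, height)."""
    cam_dict = {}
    for name, w2c in w2c_by_image.items():
        cam_dict[name] = {
            "K": np.asarray(K, dtype=float).flatten().tolist(),
            "W2C": np.asarray(w2c, dtype=float).flatten().tolist(),
            "img_size": list(img_size),
        }
    with open(out_path, "w") as f:
        json.dump(cam_dict, f, indent=2)
```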

Woolseyyy commented 2 years ago

I've run into the same problem. Any ideas?

Woolseyyy commented 2 years ago

> Hi,
>
> It's very hard for me to judge what could be problematic in your case without any visuals.
>
> Can you try the following things to check if your camera poses are correct, and post the results?
>
> 1. Inspect the camera epipolar geometry, as the NeRF++ codebase does.
> 2. Visualize the cameras and geometry, as the NeRF++ codebase does. Note that this codebase requires slightly different normalization than NeRF++: NeRF++ puts all the cameras inside the unit sphere, while this one only puts the geometry inside the unit sphere (not necessarily the cameras).
> 3. Use the run_colmap_posed.py script in NeRF++ to reconstruct the geometry with conventional COLMAP MVS.

Hi @Kai-46, there is a parameter named "object_bounding_sphere" in the .conf file. Is it necessary to make sure the object lies inside the unit sphere, or is it enough to just modify this parameter? I tried some complex scenes, and it seems difficult to rescale them, so I changed this parameter instead, but the results look strange.

Train using IDR: [image]

Train using PhySG: [image]
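
If a complex scene is hard to rescale at the source, one alternative (a sketch only, not something the repo documents) is to leave `object_bounding_sphere` alone and instead fold a global similarity transform into the poses, estimating the scene center and radius from a sparse point cloud such as COLMAP output; the pose update is the same as in the normalization sketch earlier in the thread.

```python
import numpy as np

def normalize_from_points(w2c_list, points, margin=0.95):
    """Fit the unit sphere to a sparse point cloud (e.g. COLMAP points on the
    object) and rescale the poses so the geometry lands inside it.
    w2c_list: 4x4 world-to-camera matrices; points: (N, 3) world-space array."""
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    scale = margin / radius

    out = []
    for w2c in w2c_list:
        R, t = w2c[:3, :3], w2c[:3, 3]
        new = np.eye(4)
        new[:3, :3] = R
        new[:3, 3] = scale * (t + R @ center)  # same update as the earlier sketch
        out.append(new)
    return out
```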

ThreeSRR commented 1 year ago


Hi, @Woolseyyy

When trying a complex scene, I got rendering results similar to yours. Have you solved this problem? Thanks a lot.