zzzyuqing / DreamMat

[SIGGRAPH2024] DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models

Different camera poses between dataset and blender #2

Closed Jiaozrr closed 1 month ago

Jiaozrr commented 1 month ago

Hi, thanks for your amazing work! When I try to run inference, I run into some problems in pre_render. According to the code in "uncond.py", the images in "gt" are rendered by the RayTracer module, while the images in "depth", "normal" and "lights" are rendered by Blender. It seems that the "gt" images have different camera poses from the others. Is there a bug in the c2w computation? Here are some examples rendered from the cat.obj given in run_examples.sh: the first is from "gt" and the second from "normal". I think the "gt" one is correct.

[attached images: "gt" and "normal" renders]
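For reference, a common cause of this kind of pose offset is a camera-convention mismatch between a ray tracer and Blender. This is only an illustrative sketch of the usual conversion, not the DreamMat code (and, as the follow-up below shows, the actual culprit here was the bpy version):

```python
# Minimal sketch: OpenCV-style cameras are x-right / y-down / z-forward, while
# Blender/OpenGL cameras are x-right / y-up / z-backward. Converting a c2w
# matrix between the two just flips the camera's y and z axes.
import numpy as np

def opencv_to_blender_c2w(c2w_cv: np.ndarray) -> np.ndarray:
    """Negate the camera y and z axes of a 4x4 camera-to-world matrix."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w_cv @ flip

# Example: camera at (2, 0, 0) looking at the origin, world up = +z,
# written in the OpenCV convention.
c2w_cv = np.array([
    [0.0,  0.0, -1.0, 2.0],
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0,  0.0, 1.0],
])
print(opencv_to_blender_c2w(c2w_cv))  # same position, y/z axes flipped
```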

Jiaozrr commented 1 month ago

Got it! The problem is that I used bpy==3.6, since I could not install bpy==3.2 via pip. If I use a local Blender 3.2.2 as the author suggested, the images rendered by Blender are correct and aligned with "gt":

[attached images]
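In case it helps others hitting the same problem, here is a minimal sketch (with hypothetical paths and script names, not the repository's exact command) of driving a locally installed Blender 3.2.2 in background mode instead of importing the pip bpy wheel:

```python
# Minimal sketch with hypothetical paths: invoke a local Blender 3.2.2 binary
# in background mode rather than "import bpy" from the pip wheel.
import subprocess

BLENDER_BIN = "/opt/blender-3.2.2/blender"   # hypothetical install location
RENDER_SCRIPT = "pre_render.py"              # hypothetical Blender-side script

subprocess.run(
    [
        BLENDER_BIN,
        "--background",              # run without the GUI
        "--python", RENDER_SCRIPT,   # execute the script inside Blender
        "--",                        # arguments after this go to the script
        "--mesh", "cat.obj",         # hypothetical script argument
    ],
    check=True,
)
```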