threestudio-project / threestudio

A unified framework for 3D content generation.
Apache License 2.0

extract SDF loss and viewpoint dependent samples from diffusion model #346

Closed · git2andi closed this issue 11 months ago

git2andi commented 11 months ago

Hello,

As far as I know, models like DreamFusion, Fantasia3D, Magic3D, and Magic123 refine their results using the score distillation sampling (SDS) loss introduced in DreamFusion: rendered views of the scene are scored against images generated from the same viewpoint by a (frozen) Stable Diffusion model. Unfortunately, I'm not that deep into the code yet, so I can't figure out where this happens in this project. My goal is to output the SDS loss along with the viewpoint-dependent samples the diffusion model produces, next to the validation images. Is such a project feasible? And if so, could someone help with this or provide some guidance? Any feedback would help a lot.

DSaurus commented 11 months ago

Hi @git2andi ,

You can view the SDS loss in TensorBoard during training. As for the viewpoint-dependent samples, you need to modify the compute_grad_sds function in the guidance file, such as threestudio/models/guidance/stable_diffusion_guidance.py, to visualize them.
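A rough sketch of the visualization step described above. In the real guidance code you would decode the latents back to image space with the frozen VAE inside compute_grad_sds; the function name to_uint8_images and the dummy array below are illustrative assumptions, not code from threestudio. The only part shown here is the generic post-processing from a decoded sample in [-1, 1] to savable uint8 images:

```python
# Hedged sketch: post-process decoded samples (assumed shape [B, H, W, C],
# values in [-1, 1]) into uint8 images that could be written out next to
# the validation renders. The actual decode from latents happens via the
# frozen VAE in the guidance class; a dummy array stands in for it here.
import numpy as np

def to_uint8_images(samples: np.ndarray) -> np.ndarray:
    """Map decoded samples from [-1, 1] to uint8 [0, 255]."""
    imgs = np.clip((samples + 1.0) / 2.0, 0.0, 1.0)
    return (imgs * 255.0).round().astype(np.uint8)

# Dummy batch standing in for VAE-decoded, viewpoint-dependent samples.
dummy = np.linspace(-1.0, 1.0, 2 * 4 * 4 * 3).reshape(2, 4, 4, 3)
imgs = to_uint8_images(dummy)
print(imgs.dtype, imgs.shape)
```

From there, each image in the batch could be saved (e.g. with imageio or PIL) under the current training step and camera index so it can be compared against the validation views.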

git2andi commented 11 months ago

Thank you @DSaurus!