ashawkey / stable-dreamfusion

Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.

A Simple Question: can we use a torch-ngp result to train a DMTet? #201

Closed zeng-yifei closed 1 year ago

zeng-yifei commented 1 year ago

Hi, author.

Thanks for your great work on this repo! It has inspired many follow-up research efforts and projects. There is no doubt that this repo has become an open-source milestone for the text-to-3D area.

Here I have a simple question. Since the NeRF backbone in torch-ngp seems to be the same as the backbone in this dreamfusion repo, can I use a training result from torch-ngp to further fine-tune a DMTet using --dmtet?

More specifically, does this repo make any particular changes to the NGP network part that make it different from the backbone in the torch-ngp repo?

Hoping for your reply :-)

ashawkey commented 1 year ago

@zeng-yifei Hi, I think it's possible with some effort. The major difference is that in dreamfusion we assume the lighting direction is provided so we can do Lambertian shading, whereas in reconstruction (torch-ngp) we bake lighting into the appearance and additionally append the view direction as an input to the MLP. There are also some small differences in the network architecture (e.g., parameter names, different MLP dims). You can copy this repo's network into torch-ngp and use the reconstruction code there, but some modification will still be necessary.
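
In case it helps, here is a rough sketch (not code from either repo) of how one might transplant a torch-ngp checkpoint into this repo's network before --dmtet fine-tuning. The rename map and which layers actually transfer are assumptions you would need to verify by printing both state_dicts; layers whose inputs differ (e.g., the color MLP, which takes a view direction in torch-ngp but a light direction / shading branch here) will likely have to be re-initialized and fine-tuned.

```python
import torch

def remap_checkpoint(src_path, rename_map, model):
    """Copy compatible parameters from a torch-ngp checkpoint into `model`.

    src_path:   path to the torch-ngp checkpoint (.pth).
    rename_map: {source_key: target_key} for parameters whose names differ
                between the two codebases (hypothetical; fill in by inspecting
                both state_dicts).
    model:      an instance of this repo's NeRF network.

    Parameters with no matching name or a different shape are skipped and
    returned so you know which layers must be retrained.
    """
    ckpt = torch.load(src_path, map_location='cpu')
    src_state = ckpt.get('model', ckpt)  # torch-ngp wraps weights under 'model'

    dst_state = model.state_dict()
    loaded, skipped = [], []
    for k, v in src_state.items():
        new_k = rename_map.get(k, k)
        if new_k in dst_state and dst_state[new_k].shape == v.shape:
            dst_state[new_k] = v
            loaded.append(new_k)
        else:
            skipped.append(k)

    model.load_state_dict(dst_state)
    return loaded, skipped
```

The hash-grid encoder weights are the part most likely to carry over unchanged; anything reported in `skipped` (shading-related MLP layers in particular) should be treated as randomly initialized when you start the DMTet fine-tuning.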

zeng-yifei commented 1 year ago

Thank you for your quick reply and detailed explanation! I will give it a shot~