[Open] reynoldscem opened 1 year ago
Our experiments mainly focused on NeRF generation. You can follow the same instructions as in threestudio to export meshes. However, we found that meshes exported directly from the implicit volume in threestudio do not look good, so a second-stage refinement (e.g. DMTet) may be needed.
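For reference, the standard threestudio mesh-export invocation looks roughly like the following; the paths are placeholders for your own run, and the exact flags should be checked against the threestudio README:

```shell
# Export a mesh from a finished run; replace both paths with your experiment's.
python launch.py --config path/to/trial/configs/parsed.yaml --export --gpu 0 \
    resume=path/to/trial/ckpts/last.ckpt \
    system.exporter_type=mesh-exporter \
    system.exporter.context_type=cuda
```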
@seasonSH
If I run the script without the shading version, I cannot export the mesh even when I follow threestudio's instructions (it works fine with the shading version).
The error message is as follows:
```
Traceback (most recent call last):
  File "launch.py", line 237, in <module>
    main(args, extras)
  File "launch.py", line 195, in main
    trainer.predict(system, datamodule=dm, ckpt_path=cfg.resume)
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 852, in predict
    return call._call_and_handle_interrupt(
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 894, in _predict_impl
    results = self._run(model, ckpt_path=ckpt_path)
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 946, in _run
    self._checkpoint_connector._restore_modules_and_callbacks(ckpt_path)
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 400, in _restore_modules_and_callbacks
    self.restore_model()
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 280, in restore_model
    trainer.strategy.load_model_state_dict(self._loaded_checkpoint)
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 364, in load_model_state_dict
    self.lightning_module.load_state_dict(checkpoint["state_dict"])
  File "/media/ssd1/sangwon/anaconda3/envs/mvdream/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for MVDreamSystem:
	Unexpected key(s) in state_dict: "material.ambient_light_color", "material.diffuse_light_color".
```
Thanks
Looks like the material settings in your config file differ from those used during training. Make sure they are the same as in training.
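Matching the training config is the proper fix. If that is not possible, one workaround is to drop the unexpected keys from the checkpoint's `state_dict` before resuming. This is only a sketch: the filtering helper below is a hypothetical illustration, not part of threestudio, and a real checkpoint would be loaded with `torch.load(...)`:

```python
# Hypothetical workaround: strip the unexpected "material.*" light-color
# keys from a checkpoint's state_dict so load_state_dict no longer raises.
UNEXPECTED = {"material.ambient_light_color", "material.diffuse_light_color"}

def strip_keys(state_dict, keys):
    """Return a copy of state_dict without the given keys."""
    return {k: v for k, v in state_dict.items() if k not in keys}

# In practice you would torch.load(...) the real .ckpt file; a stand-in
# dict keeps this sketch self-contained:
ckpt = {"state_dict": {"material.ambient_light_color": 1,
                       "material.diffuse_light_color": 1,
                       "encoder.weight": 2}}
ckpt["state_dict"] = strip_keys(ckpt["state_dict"], UNEXPECTED)
```

Note that skipping these keys means the loaded material will not match the trained one, so the config-matching route is still preferable.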
Hi @reynoldscem, did you figure out how to export the mesh?
@lakpa-tamang9 yes, I'm pretty sure it just worked out of the box. I was asking for recommended settings, and I'm not sure what I ended up going with.
> Our experiments mainly focused on NeRF generation. You can follow the same instructions as in threestudio to export meshes. However, we found that meshes exported directly from the implicit volume in threestudio do not look good, so a second-stage refinement (e.g. DMTet) may be needed.
Hi @seasonSH, can you please elaborate on how this could be done?
Hi, I was wondering what settings you would recommend for exporting textured models?