Here is an example config that loads an .obj file: https://github.com/NVlabs/nvdiffrec/blob/main/configs/spot_metal.json
The flag "base_mesh": "data/spot/spot.obj" should point to your wavefront .obj file.
Given that you want to use this .obj file to train from a dataset of images and poses, the next task is to place the loaded .obj file correctly using the relevant transform. Which transforms to apply depends on how you generated the poses, so I cannot give a generic answer, but it should be straightforward to transform the object to match your poses.
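For illustration, here is a minimal sketch (not part of nvdiffrec) of baking such a transform into the .obj before loading it. The 4x4 matrix, the rotation/scale values, and the output path are placeholders; derive the actual transform from however your poses were generated.

```python
# Sketch: apply a 4x4 homogeneous transform to every vertex of a wavefront .obj.
# The matrix below is a made-up example (uniform 0.5 scale + -90 deg rotation
# about X); replace it with whatever aligns your mesh to your camera poses.
import numpy as np

def transform_obj(src_path, dst_path, M):
    """Write a copy of src_path with all 'v' vertices transformed by M."""
    out_lines = []
    with open(src_path) as f:
        for line in f:
            if line.startswith("v "):           # geometric vertex
                x, y, z = map(float, line.split()[1:4])
                p = M @ np.array([x, y, z, 1.0])
                out_lines.append(f"v {p[0]} {p[1]} {p[2]}\n")
            else:                                # faces, uvs, normals, materials
                out_lines.append(line)
    # Note: 'vn' normals are left untouched; for anything beyond a translation
    # you may want to transform or recompute them as well.
    with open(dst_path, "w") as f:
        f.writelines(out_lines)

# Hypothetical example transform and paths.
M = np.diag([0.5, 0.5, 0.5, 1.0]) @ np.array([
    [1,  0, 0, 0],
    [0,  0, 1, 0],
    [0, -1, 0, 0],
    [0,  0, 0, 1],
], dtype=np.float64)
transform_obj("data/spot/spot.obj", "data/spot/spot_aligned.obj", M)
```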
Thanks! I have solved it.
@renrenzsbbb Can you please let me know how you solved this?
Thanks for your great work. I use custom data and extract the camera parameters with COLMAP following https://github.com/NVlabs/instant-ngp/blob/master/docs/nerf_dataset_tips.md. I can reconstruct a mesh and a set of textures, but the mesh surface is coarse, so I used COLMAP to reconstruct a smoother mesh. However, when I load the .obj file from COLMAP, nvdiffrec renders an empty scene. Comparing the .obj generated by COLMAP with the one produced by nvdiffrec, I find that the mesh position and pose are different. How can I load an .obj file from COLMAP to train nvdiffrec? Thanks in advance.
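One rough way to check how far apart the two frames are is to compare the COLMAP mesh against a mesh nvdiffrec has already extracted from the same poses and estimate a scale and translation from their bounding boxes. This is only a heuristic sketch (it ignores any rotation mismatch, which would instead have to come from the pose-generation step, e.g. the recentering and scaling that instant-ngp's colmap2nerf.py applies to camera poses), and the file names below are hypothetical:

```python
# Sketch: estimate a scale + translation mapping a COLMAP mesh into the frame
# of a mesh nvdiffrec reconstructed from the same poses, by comparing bounding
# boxes and centroids. Only valid if the axes of the two frames already agree.
import numpy as np

def load_obj_vertices(path):
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                verts.append([float(v) for v in line.split()[1:4]])
    return np.array(verts)

colmap_v = load_obj_vertices("colmap_mesh.obj")      # hypothetical paths
nvdiff_v = load_obj_vertices("nvdiffrec_mesh.obj")

# Average extent ratio as a single scale factor, then match centroids.
scale = (nvdiff_v.max(0) - nvdiff_v.min(0)).mean() / \
        (colmap_v.max(0) - colmap_v.min(0)).mean()
offset = nvdiff_v.mean(0) - scale * colmap_v.mean(0)

M = np.eye(4)
M[:3, :3] *= scale
M[:3, 3] = offset
print("estimated vertex transform:\n", M)   # could be fed to transform_obj above
```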