google-research / nerf-from-image

Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion
Apache License 2.0
381 stars 18 forks

Export shape #12

Open hcjghr opened 1 year ago

hcjghr commented 1 year ago

Hi!

First of all amazing work! I was wondering if there is a way to export the triangle mesh?

Thanks

dariopavllo commented 1 year ago

Hi,

I recommend that you use pymcubes for this process.

First, you need to generate a 3D grid (using torch.meshgrid) that spans the bounding volume of the scene, e.g. something like this:

import torch

num_steps = 128
values = torch.linspace(-dataset_config['scene_range'], dataset_config['scene_range'], num_steps)
coords = torch.stack(torch.meshgrid(values, values, values, indexing='xy'), dim=-1)

Then, you sample the SDF (not the final density!) at coords, and recover the triangle mesh by calling

import mcubes  # pip install pymcubes

vertices, triangles = mcubes.marching_cubes(-sdf, 0)

If you also want vertex colors, you can query the radiance field again at the coordinates given by vertices. Note that marching cubes returns vertices in grid-index space, so you need to map them back to world coordinates first.
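Putting the steps together, here is a minimal end-to-end sketch. Since the trained model is not available here, it uses a unit sphere as a stand-in SDF and an assumed scene_range of 1.2; replace those with your network's SDF output and the value from dataset_config:

```python
import numpy as np

# Build a 3D query grid over the bounding volume, mirroring the torch
# snippet above (scene_range = 1.2 is an assumed placeholder value).
num_steps = 64
scene_range = 1.2
values = np.linspace(-scene_range, scene_range, num_steps)
grid = np.stack(np.meshgrid(values, values, values, indexing='ij'), axis=-1)
coords = grid.reshape(-1, 3)  # (num_steps**3, 3) query points

# Stand-in for querying the model's SDF: signed distance to a sphere
# of radius 0.5 (replace with the network's SDF evaluated at coords).
sdf = np.linalg.norm(coords, axis=-1) - 0.5
sdf = sdf.reshape(num_steps, num_steps, num_steps)

# Extract the zero level set; note the sign flip from the snippet above.
try:
    import mcubes  # pip install pymcubes
    vertices, triangles = mcubes.marching_cubes(-sdf, 0)
    # Map vertices from grid-index space back to world coordinates
    # before sampling colors or exporting.
    vertices = vertices / (num_steps - 1) * 2 * scene_range - scene_range
    mcubes.export_obj(vertices, triangles, 'mesh.obj')
except ImportError:
    pass  # pymcubes not installed; skip the extraction step
```

As a side note, pymcubes can also write Collada files: mcubes.export_mesh(vertices, triangles, 'mesh.dae', 'mesh') produces a .dae (this requires the pycollada package).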

Hope this helps!

TamirG765 commented 1 year ago

Hey,

Is there any way to export a model? Maybe as a .dae file?

Thanks in advance