NVlabs / neuralangelo

Official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction" (CVPR 2023)
https://research.nvidia.com/labs/dir/neuralangelo/

Mesh bad even though training shows good results #95

Closed deeepwin closed 2 months ago

deeepwin commented 1 year ago

I have trained on DTU65 with good results. I am now training on custom data, a scan of a Logitech mouse, with the latest commit. The training looks good after 500k steps, with nice normals and rendering:

image

Nevertheless, when I mesh it according to the instructions, the output is completely wrong, as you can see here:

mesh

The bounding sphere could be optimized, but since training appears successful, there seems to be something wrong with the meshing.

preprocessing

What could be the problem here? Is there a setting in mesh generation that needs to be adjusted? I have seen that others had issues with meshing from a particular checkpoint. Here is my code:

# mouse-2
EXPERIMENT=custom/mouse-2
CONFIG=logs/neuralangelo/mouse-2/config.yaml  # config saved during training; was missing, but referenced below
CHECKPOINT=logs/neuralangelo/mouse-2/epoch_01683_iteration_000500000_checkpoint.pt
OUTPUT_MESH=mouse-2_mesh.ply
RESOLUTION=2048
BLOCK_RES=128

# generate mesh
torchrun projects/neuralangelo/scripts/extract_mesh.py \
    --config=${CONFIG} \
    --checkpoint=${CHECKPOINT} \
    --output_file=${OUTPUT_MESH} \
    --resolution=${RESOLUTION} \
    --block_res=${BLOCK_RES} \
    --textured

Any help would be appreciated.

Dragonkingpan commented 1 year ago

The camera poses estimated during the COLMAP stage are inaccurate for objects with few surface features, so the generated image is very poor.

deeepwin commented 1 year ago

> The camera poses estimated during the COLMAP stage are inaccurate for objects with few surface features, so the generated image is very poor.

But the poses actually look quite accurate, as they move very consistently. Also, the rendered image for evaluation looks very close to the target. Or which generated image do you mean is very poor?

dolcelee commented 1 year ago

I met the same problem. I imported my COLMAP data into Blender and the bounding sphere looks tight, but the generated mesh is not even close to my data. The mesh was much better before the texture-fix commit was merged.
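A quick numerical stand-in for the Blender check is to count how much of the sparse point cloud actually falls inside the bounding sphere. This is only a hedged sketch: the point list and sphere parameters below are made up, and loading the real COLMAP `points3D.txt` is left out.

```python
import math

def fraction_inside_sphere(points, center, radius):
    """Fraction of 3D points that fall inside the bounding sphere."""
    inside = sum(1 for p in points if math.dist(p, center) <= radius)
    return inside / len(points)

# Toy points standing in for a COLMAP sparse cloud (hypothetical values).
points = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.1), (2.0, 0.0, 0.0)]
print(fraction_inside_sphere(points, (0.0, 0.0, 0.0), 1.0))  # 2 of 3 points inside
```

If this fraction is well below 1.0, the sphere is clipping the scene; if it is 1.0 with lots of slack, the sphere is too loose.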

mli0603 commented 1 year ago

Hi @dolcelee

Can you:

iam-machine commented 1 year ago

Same problem. I used default settings and manually adjusted the bounding box in Blender (though it had free space here and there; if I made it smaller, I would cut out important pieces of the point cloud). The val/vis/rgb_render on W&B looked decent. I didn't wait for the full 500k iterations and exported the mesh at around 350k. It looked like a huge box full of colorful blobs (the input video was a street with a road and houses).

dolcelee commented 1 year ago

@mli0603

deeepwin commented 1 year ago

@mli0603

I have also added my example to be able to reproduce:

https://1drv.ms/u/s!AtwBlzVMECHC4m1cGAxKqqd_mTKB?e=zVayUR

deeepwin commented 1 year ago

I have retried and it seems to work now (with commit b772282d26f62064401b1f4f0d53eefe908afdb3). I do not know why, but I did the following things:

  1. I generated the config file with `python3 projects/neuralangelo/scripts/generate_config.py --sequence_name mouse-2 --data_dir $DATA_PATH --scene_type object` instead of using custom/template.yaml.
  2. I ensured that the bounding sphere fits tightly around the object and updated the sphere's center point and scale in the generated config file. After retraining, I could generate a decent mesh using extract_mesh.py.
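The sphere adjustment in step 2 can be sketched as a small config edit. This is only an illustration: the key path `data.readjust.center` / `data.readjust.scale` is an assumption about the generated YAML's structure, the values are hypothetical, and reading/writing the actual file with a YAML library is omitted.

```python
def set_bounding_sphere(cfg, center, scale):
    """Write a tighter bounding-sphere center/scale into a config dict.

    Key names are assumed, not taken from the repo; adapt to your config.
    """
    readjust = cfg.setdefault("data", {}).setdefault("readjust", {})
    readjust["center"] = list(center)
    readjust["scale"] = float(scale)
    return cfg

# Config as loaded from the generated YAML (structure assumed, values hypothetical).
cfg = {"data": {"readjust": {"center": [0.0, 0.0, 0.0], "scale": 1.0}}}
set_bounding_sphere(cfg, (0.02, -0.01, 0.15), 0.35)  # tight fit around the object
```

After editing, retrain so the SDF is optimized inside the corrected sphere; the old checkpoint will not match the new normalization.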

I have written a notebook that guides you through the process of preparing a custom dataset: Notebook

iam-machine commented 1 year ago

@deeepwin Hi! If it works, could you please show the quality of the exported mesh?

deeepwin commented 1 year ago

@iam-machine Yes, sure, here is my example, textured and untextured:

image image

mli0603 commented 1 year ago

Hi @deeepwin @dolcelee

Thank you for sharing the useful info! We are looking into this issue. Will update!

mli0603 commented 1 year ago

Hi @dolcelee

On my end, even before the commit you provided, I am getting the same mesh results. I want to confirm that you are getting two different meshes with the same checkpoint. Is this the case?

dolcelee commented 1 year ago

@mli0603

I retrained after commit c91af8d5098c858df8e8dfa35fba8666d314782b, since it required retraining. So it is not the same checkpoint that could generate a good mesh, but the data and the COLMAP-processed files were the same.

dolcelee commented 1 year ago

@mli0603

I followed deeepwin's method and did the whole process all over again, starting at the COLMAP part. I don't know why, but I got a decent mesh! The shape of the mesh is pretty awesome, but the color is weird, especially the skin part. Can you help me improve the performance? _tmp_dingtalkgov_qt_pic_1694157581981

Dragonkingpan commented 1 year ago

@mli0603

> I followed deeepwin's method and did the whole process all over again, starting at the COLMAP part. I don't know why, but I got a decent mesh! The shape of the mesh is pretty awesome, but the color is weird, especially the skin part. Can you help me improve the performance?

I also tested a custom model and just produced the third result. The mesh looks okay, but the color saturation is clearly much higher than in the original image. 1694173555993 original image 0018

mli0603 commented 1 year ago

Thanks for the update.

Choco83 commented 1 year ago

> 2. the bounding sphere is fitting tight around the object and updated the cen

@deeepwin Thanks for the nice notebook. Can you tell me how you are calculating the camera poses for your video?

deeepwin commented 12 months ago

@Choco83 There are several ways. For the mouse-2 example I used Kiri, see the Nerfstudio description here. In one instance, I already had the poses in COLMAP format (images.txt) from the image sensor, see here.
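For readers who already have poses in COLMAP's images.txt format: each image line stores a world-to-camera rotation as a quaternion `QW QX QY QZ` plus a translation `TX TY TZ`, and the camera center in world coordinates is `C = -Rᵀ t`. A minimal sketch of that conversion (file parsing omitted; values below are illustrative):

```python
def qvec_to_rotmat(qw, qx, qy, qz):
    """World-to-camera rotation matrix from a (unit) COLMAP quaternion."""
    return [
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),     2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw),     1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw),     2 * (qy * qz + qx * qw),     1 - 2 * (qx * qx + qy * qy)],
    ]

def camera_center(qvec, tvec):
    """Camera center in world coordinates: C = -R^T t."""
    R = qvec_to_rotmat(*qvec)
    return [-sum(R[r][c] * tvec[r] for r in range(3)) for c in range(3)]

# Identity rotation: the center is simply -t.
print(camera_center((1.0, 0.0, 0.0, 0.0), (1.0, 2.0, 3.0)))  # [-1.0, -2.0, -3.0]
```

Plotting these centers is a quick way to verify that the recovered trajectory "moves very consistently", as mentioned above.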

zhj1013 commented 9 months ago

> I also tested a custom model and just produced the third result. Mesh looks okay, but the color saturation is clearly much higher than in the original image. 1694173555993 original image 0018

Awesome result! Could you please tell me how your video was shot, and whether the data preprocessing followed the DATA_PROCESSING.md document?