nv-tlabs / NKSR

[CVPR 2023 Highlight] Neural Kernel Surface Reconstruction
https://research.nvidia.com/labs/toronto-ai/NKSR

How to reproduce the results in the paper? #2

Closed qpc001 closed 1 year ago

qpc001 commented 1 year ago

I am trying to use this method on CARLA HD maps (Town01), but this is the result I got:

[screenshot: reconstructed mesh with many cracks]

The result contains lots of cracks.

I used the script recons_waymo.py to generate it.

```python
import nksr
import torch

from pycg import vis, exp
from pathlib import Path
import numpy as np
from common import load_waymo_example, warning_on_low_memory

if __name__ == '__main__':
    warning_on_low_memory(20000.0)
    xyz_np, sensor_np = load_waymo_example()

    device = torch.device("cuda:0")
    reconstructor = nksr.Reconstructor(device)
    reconstructor.chunk_tmp_device = torch.device("cpu")

    input_xyz = torch.from_numpy(xyz_np).float().to(device)
    input_sensor = torch.from_numpy(sensor_np).float().to(device)

    field = reconstructor.reconstruct(
        input_xyz, sensor=input_sensor, detail_level=None,
        # Minor configs for better efficiency (not necessary)
        voxel_size=0.1,
        approx_kernel_grad=True, solver_tol=1e-4, fused_mode=True,
        # Chunked reconstruction (if OOM)
        # chunk_size=51.2,
        preprocess_fn=nksr.get_estimate_normal_preprocess_fn(64, 200.0)
    )
    mesh = field.extract_dual_mesh(mise_iter=1)
    mesh = vis.mesh(mesh.v, mesh.f)

    vis.show_3d([mesh], [vis.pointcloud(xyz_np)])
```
heiwang1997 commented 1 year ago

Hi, thanks for your interest in our paper! I guess the main reason is wrongly estimated normals, caused by incorrect sensor positions. Could you please visualize your sensor positions?

Alternatively, you can download our official CARLA dataset here and see if the problem persists.

qpc001 commented 1 year ago

> Hi, thanks for your interest in our paper! I guess the main reason is wrongly estimated normals, caused by incorrect sensor positions. Could you please visualize your sensor positions?
>
> Alternatively, you can download our official CARLA dataset here and see if the problem persists.

I used the sensor position [0, 0, 0] for recons_waymo.py.

I also tried the script recons_simple.py, but got a similar result. (The normals were computed with CloudCompare.)

heiwang1997 commented 1 year ago

Ah, I see the reason :)

Sensor position refers to the position of the sensor that captured each point, and it can be different for each point. Usually you can use your vehicle's positions to approximate these sensor positions, instead of [0, 0, 0].
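To make the idea concrete, here is a minimal sketch (not part of NKSR; all names and values are made up for illustration) of building a per-point sensor array from per-frame vehicle poses. Each point inherits the pose of the frame that captured it, so `sensor_np` ends up with the same shape as the point cloud:

```python
import numpy as np

# Hypothetical setup: 3 frames, 4 points per frame, one vehicle/sensor
# pose per frame (LiDAR mounted ~1.8 m above the ground).
num_frames, pts_per_frame = 3, 4
vehicle_positions = np.array([[0.0, 0.0, 1.8],    # frame 0 pose
                              [5.0, 0.0, 1.8],    # frame 1 pose
                              [10.0, 0.0, 1.8]])  # frame 2 pose

# frame_idx[i] = index of the frame that captured point i.
frame_idx = np.repeat(np.arange(num_frames), pts_per_frame)  # shape (N,)

# Per-point sensor positions: same leading shape as the point cloud.
sensor_np = vehicle_positions[frame_idx]  # shape (N, 3)
```

The resulting `sensor_np` can be passed as the `sensor` argument in the script above instead of a constant [0, 0, 0].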

The normals computed by CloudCompare suffer from a similar problem: their orientations are not consistent, i.e., some normals on the road point up while others point down.

Hence, two solutions:

  1. When you generate your CARLA dataset, record the sensor position for each point; don't use [0,0,0] :)
  2. Use our provided dataset; we did all that for you.
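The inconsistency the second paragraph describes can also be fixed after the fact, given per-point sensor positions: flip every normal whose dot product with the point-to-sensor direction is negative, so all normals face the sensor. A minimal sketch (my own helper, not an NKSR API):

```python
import numpy as np

def orient_normals_toward_sensor(xyz, normals, sensor):
    """Flip normals so each one points toward the sensor that saw it."""
    to_sensor = sensor - xyz                             # (N, 3) point -> sensor
    flip = np.sum(normals * to_sensor, axis=-1) < 0.0    # facing away?
    normals = normals.copy()
    normals[flip] *= -1.0
    return normals

# Two road points with inconsistent normals (one up, one down) and a
# sensor mounted 2 m above them: after orientation, both point up.
xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
sensor = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
oriented = orient_normals_toward_sensor(xyz, normals, sensor)
# oriented is [[0, 0, 1], [0, 0, 1]]
```

This is essentially what passing correct `sensor` positions to the reconstructor achieves internally; with constant [0, 0, 0] positions the sign test above is meaningless, which is why the mesh cracks.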

Best.

qpc001 commented 1 year ago

Thanks a lot.

rockywind commented 1 year ago

How can I save the mesh as a colored image like the one below?

[screenshot: target colored mesh rendering]

I saved the mesh and opened it in MeshLab; it looks like this:

[screenshot: mesh as displayed in MeshLab]