Pyn-hust opened this issue 1 year ago
Hi, according to the error message, data_in_obs
is empty. It's either because the mesh loading failed (Open3D does not raise an exception for this) or the scale is not aligned (e.g. directly evaluating a mesh from NeRF, which is still in a bounding box of size 1).
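A quick way to tell these two causes apart is to print the point count and extent of the loaded points before the in-bounds filter. A minimal numpy sketch (the bounding-box values below are illustrative only, not the actual ObsMask bounds the eval script uses):

```python
import numpy as np

def diagnose_points(points, bbox_min, bbox_max):
    """Report how many points survive the in-bounds filter and their extent."""
    points = np.asarray(points)
    if points.size == 0:
        # empty array: the mesh most likely failed to load
        return {"n_points": 0, "n_inbound": 0, "extent": None}
    inbound = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    return {
        "n_points": len(points),
        "n_inbound": int(inbound.sum()),
        "extent": points.max(0) - points.min(0),  # extent ~1 hints at NeRF's unit cube
    }

# A mesh still in NeRF's unit cube, checked against illustrative DTU-scale bounds (mm):
unit_cube_pts = np.random.rand(1000, 3) - 0.5
report = diagnose_points(unit_cube_pts,
                         np.array([50.0, 50.0, 200.0]),
                         np.array([400.0, 400.0, 800.0]))
# every point falls outside the box -> n_inbound == 0, i.e. a scale mismatch
```

If `n_points` is 0 the mesh never loaded; if `n_points` is large but `n_inbound` is 0 and the extent is around 1, the mesh is still in normalized coordinates.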
Hi @rashikshrestha, it seems you are using the default settings of COLMAP, which means the cameras are different from the ground truth provided by the dataset. You can open your point cloud and the ground-truth point cloud together in MeshLab and see if they are aligned.
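Besides eyeballing the two clouds in MeshLab, a crude numeric check can flag gross misalignment: if the centroids or extents differ by orders of magnitude, the coordinate systems don't match. A sketch (the `roughly_aligned` helper and the 0.5 tolerance are made up here, chosen only to catch gross mismatches):

```python
import numpy as np

def roughly_aligned(pred, gt, rel_tol=0.5):
    """True if the two clouds have comparable extents and centers."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    extent_pred = pred.max(0) - pred.min(0)
    extent_gt = gt.max(0) - gt.min(0)
    scale_ratio = extent_pred / extent_gt        # ~1 everywhere if scales match
    center_gap = np.linalg.norm(pred.mean(0) - gt.mean(0))
    return bool(np.all(np.abs(scale_ratio - 1.0) < rel_tol)
                and center_gap < rel_tol * np.linalg.norm(extent_gt))
```

A predicted mesh still in a unit cube would fail this check against a DTU ground-truth cloud measured in millimetres.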
Hello, I have also encountered the same problem. I am using the PLY file (mesh) generated by NeRF for evaluation. May I ask if there is any solution? Sorry, I did not understand your previous answer.
Hi @baekhyun77, it is basically a problem of a difference in coordinate systems.
In my case, the coordinate system provided by COLMAP was different from the ground truth. I solved it by using the same camera poses as the ground truth and triangulating to get the point cloud.
In your case, I think it is because NeRF normalizes the poses to a unit cube, which gives a different coordinate system than the GT given by DTU. Rescaling the coordinates from the normalized system back to the GT one can solve the problem.
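Concretely, MonoSDF-style DTU pipelines keep a 4x4 similarity transform (often stored as `scale_mat_<i>` in a `cameras.npz`, though the file and key names vary between codebases) that maps normalized coordinates back to the DTU world frame; applying it to the extracted mesh vertices before evaluation is the rescaling meant here. A sketch:

```python
import numpy as np

def to_world(verts_norm, scale_mat):
    """Map (N, 3) vertices from normalized space to world space
    using a 4x4 similarity transform."""
    verts_norm = np.asarray(verts_norm)
    homo = np.concatenate([verts_norm, np.ones((len(verts_norm), 1))], axis=1)
    return (homo @ scale_mat.T)[:, :3]

# e.g. (codebase-dependent names):
# scale_mat = np.load("cameras.npz")["scale_mat_0"]
# verts_world = to_world(mesh_vertices, scale_mat)
```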
@rashikshrestha Thank you for your answer. How can I take the normalized coordinate system and convert it to the GT coordinate system? I'm sorry, I'm not very familiar with this aspect.
Hi, I got the same problem as well. Were you able to solve it @YiningPeng ?
I printed the shapes from data_pcd to data_in_obs:
Data PCD: (42495, 3)
Data Down: (1431, 3)
Data In: (0, 3)
Data OBS: (0, 3)
This step
data_in = data_down[inbound]
took out all the points. What I did:
- Used the images folder of a scene as the images_dir for COLMAP and created a sparse reconstruction (in the form of points3D.bin)
- Opened points3D.bin via the COLMAP GUI and saved the point cloud as PLY
- Used this PLY file as the --data arg of this eval code
@jzhangbs can you help me with this?
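The GUI export step can also be scripted. If the model is first converted to text form (e.g. with `colmap model_converter --output_type TXT`), `points3D.txt` holds one `POINT3D_ID X Y Z R G B ERROR TRACK...` record per line, which is easy to dump to an ASCII PLY. A minimal sketch:

```python
import numpy as np

def points3d_txt_to_ply(txt_path, ply_path):
    """Convert COLMAP's points3D.txt to a minimal ASCII PLY point cloud."""
    xyz = []
    with open(txt_path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue  # skip header comments and blank lines
            fields = line.split()
            xyz.append([float(v) for v in fields[1:4]])  # X Y Z columns
    xyz = np.array(xyz)
    with open(ply_path, 'w') as f:
        f.write('ply\nformat ascii 1.0\n')
        f.write(f'element vertex {len(xyz)}\n')
        f.write('property float x\nproperty float y\nproperty float z\n')
        f.write('end_header\n')
        for p in xyz:
            f.write(f'{p[0]} {p[1]} {p[2]}\n')
    return len(xyz)
```

Note this only removes the manual GUI step; the resulting cloud is still in COLMAP's coordinate system, so the alignment issue above remains.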
I have solved it. This problem is caused by the GT point cloud and the predicted mesh not being at the same scale. The baselines I use are MonoSDF and NeuS, and all we need to do is check whether we multiply by the 'scale' parameter.
Hi, may I ask how exactly you scale it to the GT scale? I tried using the bounding-box info provided in the DTU dataset, but visualizing the point clouds doesn't seem to tell me they are 100% aligned.
@iszihan Hi, have you solved this problem?
@YiningPeng Hi, could you please share more details on multiplying by the 'scale' parameter?
@iszihan @ThePassedWind I referred to the test code of MonoSDF for the scale transformation.
@YiningPeng Thanks for your reply! Plus, I want to ask whether you faced this situation: the process gets killed when downsampling the point cloud?
I haven't had that happen to me. I've only had cases where the memory explodes or the output is NaN. Personally, I recommend checking whether the GT mesh is aligned with the predicted mesh after the scale transformation before running this code. I've only used this code when testing MonoSDF; when testing other baselines, I use their own test code because I can't work out their mesh scales. I'm sorry, maybe that doesn't help you.
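On the "Killed" during downsampling: that is usually the kernel's OOM killer. The eval script's own downsampling routine differs, but if memory is the bottleneck, a constant-overhead alternative is to hash points into voxel indices and keep one representative per occupied voxel (a sketch, not the script's exact algorithm):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one point per occupied voxel; O(N) memory, no dense grid."""
    points = np.asarray(points)
    idx = np.floor((points - points.min(0)) / voxel_size).astype(np.int64)
    # map each unique voxel to the first point that fell into it
    _, keep = np.unique(idx, axis=0, return_index=True)
    return points[np.sort(keep)]
```

Because the reduction happens per voxel rather than through a dense occupancy grid, memory scales with the number of points, not the volume of the bounding box.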
Okay, I'm testing Neuralangelo. The Neuralangelo repo is vague about evaluating on DTU. Thanks for your reply again~
Great code! But when I generate a mesh, I get the following error. I found that when the mesh has holes, a memory explosion occurs or the error below is reported. But I'm not sure if this error is caused by an empty mesh. Do you have a way to solve this problem?
compute data2stl:  67%|... | 6/9 [00:02<00:01, 1.68it/s]
Traceback (most recent call last):
  File "eval.py", line 193, in <module>
    accuracy, completeness, overall = run(args, mesh_path)
  File "eval.py", line 113, in run
    dist_d2s, idx_d2s = nn_engine.kneighbors(data_in_obs, n_neighbors=1, return_distance=True)
  File "/mnt/A/hust_pyn/anaconda3/envs/meshtest/lib/python3.6/site-packages/sklearn/neighbors/_base.py", line 670, in kneighbors
    X = check_array(X, accept_sparse='csr')
  File "/mnt/A/hust_pyn/anaconda3/envs/meshtest/lib/python3.6/site-packages/sklearn/utils/validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "/mnt/A/hust_pyn/anaconda3/envs/meshtest/lib/python3.6/site-packages/sklearn/utils/validation.py", line 729, in check_array
    context))
ValueError: Found array with 0 sample(s) (shape=(0, 3)) while a minimum of 1 is required.
compute data2stl:  67%|... | 6/9 [00:19<00:09, 3.21s/it]
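The ValueError at the bottom is just scikit-learn rejecting an empty query array; guarding the kneighbors call turns the crash into an actionable message. A sketch (the wrapper name is made up; `nn_engine` is any fitted NearestNeighbors-like object):

```python
import numpy as np

def safe_knn_distance(nn_engine, query):
    """Return 1-NN distances, or None with a clear message when the query is empty."""
    query = np.asarray(query)
    if len(query) == 0:
        # empty data_in_obs: mesh failed to load, or it is on the wrong scale
        print('No sample points inside the observation mask; '
              'check mesh loading and scale alignment before evaluating.')
        return None
    dist, _ = nn_engine.kneighbors(query, n_neighbors=1, return_distance=True)
    return dist.ravel()
```

This doesn't fix the underlying empty-mesh or scale issue, but it makes the failure mode explicit instead of a traceback deep inside sklearn.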