elegy12138 opened 7 months ago
Could you provide some test datasets?
I generated some random data and also copied the example 2.obj to use as both the prediction and the GT (Ground Truth). Logically, the WED (Wireframe Edit Distance) should be 0, but the results did not meet expectations, as shown in the diagram.
After running my own tests, I've identified that the issue arises from how specific scenarios are handled in the following code:
```python
edge_mask = edge_distance[predict_indices, label_indices] <= 0.1
pr_corners = pred_edges_vertices[predict_indices[edge_mask]]
gt_corners = label_edges_vertices[label_indices[edge_mask]]

# Calculate independent corners that aren't used in the edges
un_match_pr_corners = remove_corners(predicted_corners, np.unique(pr_corners.reshape(-1, 3), axis=0))
un_match_gt_corners = remove_corners(label_corners, np.unique(gt_corners.reshape(-1, 3), axis=0))
distance_matrix = cdist(un_match_pr_corners, un_match_gt_corners)
predict_indices, label_indices = linear_sum_assignment(distance_matrix)
mask = distance_matrix[predict_indices, label_indices] <= 0.1
distances = np.sum(distance_matrix[predict_indices[mask], label_indices[mask]])

# Calculate the positive corner offsets
pr_vertices = np.unique(pr_corners.reshape(-1, 3), axis=0)
gt_vertices = np.unique(gt_corners.reshape(-1, 3), axis=0)
distance_matrix = cdist(pr_vertices, gt_vertices)
min_distance = np.min(distance_matrix, axis=1)
distances += np.sum(min_distance)

# Wireframe edit distance
for k, indices in enumerate(predict_indices[edge_mask]):
    pred_edges_vertices[indices] = label_edges_vertices[label_indices[edge_mask][k]]
predicted_corners = label_edges_vertices.reshape(-1, 3)
predicted_corners = np.unique(predicted_corners, axis=0)
submission_edges = computer_edges(label_edges_vertices, predicted_corners)  # get the edge index
wed = graph_edit_distance(predicted_corners, submission_edges.copy(), label_corners.copy(),
                          label_edges.copy(), distances)
```
When computing un_match_pr_corners and un_match_gt_corners, either or both can come back empty. This happens in several situations:

1. If the Ground Truth (GT) corners and the predicted corners match perfectly, both un_match_pr_corners and un_match_gt_corners are empty, since no corners are left unmatched.
2. If every GT corner has found a corresponding predicted corner but some predicted corners remain unmatched, un_match_gt_corners is empty.
3. If every predicted corner is matched to a GT corner but some GT corners are left over, un_match_pr_corners is empty.
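If it helps, here is a minimal sketch of one possible guard for these degenerate cases. Variable names follow the snippet above; the empty stand-in arrays are just there to exercise the guard, and how leftover corners should actually be penalized depends on the metric's intended definition:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

# Stand-ins for the snippet's variables (empty here to exercise the guard).
un_match_pr_corners = np.empty((0, 3))
un_match_gt_corners = np.empty((0, 3))

# Only build the cost matrix when both unmatched-corner sets are non-empty;
# cdist and linear_sum_assignment degenerate otherwise.
if len(un_match_pr_corners) > 0 and len(un_match_gt_corners) > 0:
    distance_matrix = cdist(un_match_pr_corners, un_match_gt_corners)
    predict_indices, label_indices = linear_sum_assignment(distance_matrix)
    mask = distance_matrix[predict_indices, label_indices] <= 0.1
    distances = np.sum(distance_matrix[predict_indices[mask], label_indices[mask]])
else:
    # No unmatched corners on one (or both) sides: skip the assignment entirely.
    predict_indices = label_indices = np.empty(0, dtype=int)
    distances = 0.0
```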
Secondly, edge_mask acts as a boolean mask for edge matching, and I'm puzzled why it's later applied to the results of matching un_match_pr_corners against un_match_gt_corners: the shapes of those matching results need not align with edge_mask at all. For instance, edge_mask might come out as [True, True, True, True, True, True, True, True] while un_match_pr_corners and un_match_gt_corners are both empty, which leads to the following:
```python
distance_matrix = cdist(un_match_pr_corners, un_match_gt_corners)
predict_indices, label_indices = linear_sum_assignment(distance_matrix)
```
As a result, both predict_indices and label_indices end up empty, so indexing the length-0 predict_indices with the length-8 edge_mask fails with a shape mismatch. This is just one scenario, and I find it quite perplexing.
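Here is a standalone reproduction of that failure mode (made-up sizes, recent NumPy/SciPy assumed):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

edge_mask = np.ones(8, dtype=bool)        # 8 matched edges
distance_matrix = np.empty((0, 0))        # both unmatched-corner sets empty

# On an empty cost matrix, linear_sum_assignment returns two empty index arrays
predict_indices, label_indices = linear_sum_assignment(distance_matrix)
print(predict_indices.shape)              # (0,)

# The later loop then indexes the re-assigned, length-0 predict_indices
# with the length-8 edge_mask:
predict_indices[edge_mask]                # IndexError: boolean index shape mismatch
```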
Thank you, I will look into this issue today.
Thank you very much for your assistance once again.
I've modified the code and added comments.
Hi, I am also a bit confused about the results of your evaluation code.
I took your example from https://github.com/geospatial-lab/Building3D/blob/Building3D/eval/ap_calculator.py.ipynb.
Now I make a slightly modified 2.obj; let's call it reconstruction.obj:
```
v 535271.8199996186 6580870.999999885 44.42389989852905
v 535252.1199998093 6580879.400000458 44.40930009841919
v 535243.3900000007 6580871.320000057 43.34209991455078
v 535243.6900000108 6580879.699999695 43.3276999092102
v 535252.2799996567 6580883.750000839 46.978299865722654
v 535262.2600001526 6580883.38 48.243500003814695
v 535261.6299990845 6580866.599999999 48.254900226593016
l 1 5
l 2 4
l 4 3
l 5 6
l 6 7
l 7 2
l 1 3
l 2 3
l 5 2
```
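For context, here is a minimal sketch of how such an OBJ wireframe maps to arrays of vertices and edges. read_wireframe_obj is a hypothetical helper that handles only `v` and `l` records, not the repo's actual loader:

```python
import numpy as np

def read_wireframe_obj(path):
    """Parse 'v x y z' vertex lines and 'l i j' edge lines from an OBJ file."""
    vertices, edges = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.append([float(c) for c in parts[1:4]])
            elif parts[0] == "l":
                # OBJ indices are 1-based; convert to 0-based
                edges.append([int(i) - 1 for i in parts[1:3]])
    return np.asarray(vertices), np.asarray(edges)

verts, edges = read_wireframe_obj("reconstruction.obj")
print(verts.shape, edges.shape)   # (7, 3) (9, 2)
```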
The wireframe for reconstruction.obj looks like this: [image]
The wireframe for your example 2.obj looks like this: [image]
Yet, the WED = 0:
```
Wireframe Edit distance 0.0
Average Corner offset 0.0
Corners Precision: 0.8571428571428571
Corners Recall: 0.75
Corners F1: 0.7999999999999999
Edges Precision: 0.4444444444444444
Edges Recall: 0.5
Edges F1: 0.47058823529411764
```
Is that correct? What is the exact definition of the 'Wireframe Edit distance'? I cannot find it in your paper.
I will check again as soon as possible; some urgent matters have kept me occupied. As for the WED, you can find its definition on the Building3D challenge page: https://usm3d.github.io/.
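For reference, WED presumably follows the generic graph edit distance between the predicted wireframe $G_1$ and the GT wireframe $G_2$; this is the standard GED formulation, not a quote from the challenge page:

$$\mathrm{GED}(G_1, G_2) = \min_{(e_1,\ldots,e_k)\,\in\,\mathcal{P}(G_1,G_2)} \sum_{i=1}^{k} c(e_i)$$

where $\mathcal{P}(G_1, G_2)$ is the set of edit paths transforming $G_1$ into $G_2$ (vertex and edge insertions, deletions, and substitutions) and $c(e_i) \ge 0$ is the cost of each edit operation.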
Thank you very much for your excellent work. However, I encountered some problems while using the eval code; it seems not to be finished:

- In the following code snippet, there is no initialization for wed.
- When I pass data matching the inputs of compute_metrics, it does not calculate as expected and produces bizarre results.
- The input batch size is never used in the code; it is obtained through other means instead.
- When I run the WED calculation on randomly generated data, the result is always the same, even when I use two identical sets of data as prediction and ground truth (GT).

This makes me suspect that this is not the final version of the code. Could you please upload the latest version or address these issues?
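For what it's worth, this is the kind of identity sanity check I would expect to pass. compute_metrics is used with an assumed signature and assumed result keys here, and read_wireframe_obj is the hypothetical parser sketched earlier in the thread; adapt both to the real eval code:

```python
import numpy as np

# Evaluating a wireframe against itself should yield WED = 0
# and perfect corner/edge precision, recall, and F1.
vertices, edges = read_wireframe_obj("2.obj")

# Assumed interface; the actual compute_metrics inputs may differ.
metrics = compute_metrics(pred_vertices=vertices, pred_edges=edges,
                          gt_vertices=vertices, gt_edges=edges)
assert np.isclose(metrics["wed"], 0.0)
assert np.isclose(metrics["corners_f1"], 1.0)
assert np.isclose(metrics["edges_f1"], 1.0)
```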