Closed wangxinqun closed 1 year ago
Hi,
But it seems that the neighbor list of every vertex v is the same, so what is the point of ranking the list?
Hi, it is for the predicted neighbors.
Hi, I designed the following example to explain how I understand the process; if I am wrong, please tell me. Suppose that we have two lane centers, and the ground truth of the lane graph is: 0 1 0 0
Meanwhile, suppose that the matched predicted lane graph is: 0.10 0.80 0.15 0.01
According to your paper, "positive edges are those whose confidence is greater than 0.5". So the one-hot predicted lane graph is 0 1 0 0
For the first vertex 0, the predicted neighbor list is [0, 1] (the values of the edges are [0, 1]), and the ordered predicted neighbor list is [1, 0], because the confidence 0.80 > 0.10.
The example looks correct. For the ordered predicted neighbors [1, 0], the order of the GT also changes to [1, 0]. I think the 7th slide here would explain why we need to sort the neighbors according to their confidence.
Thanks a lot, the slide you shared helped a lot. Actually, according to your slide, my example was wrong, and the correct one should be as follows.
Suppose that we have two lane centers, and the ground truth of the lane graph is: 0 1 0 0
Meanwhile, suppose that the matched predicted lane graph is: 0.10 0.80 0.15 0.01
According to your paper, "positive edges are those whose confidence is greater than 0.5". So the one-hot predicted lane graph is 0 1 0 0
For the first vertex 0, the predicted neighbor list is [1].
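The corrected procedure (threshold at 0.5, then sort the surviving neighbors by confidence) can be sketched in a few lines. This is only an illustrative sketch, not the repository's actual code; `ordered_neighbors` is a hypothetical helper name, and the 0.5 threshold is the one quoted from the paper.

```python
def ordered_neighbors(confidences, threshold=0.5):
    """Return indices of positive edges (confidence > threshold),
    sorted by descending confidence."""
    positive = [i for i, c in enumerate(confidences) if c > threshold]
    return sorted(positive, key=lambda i: -confidences[i])

# The example above: only edge 1 survives the 0.5 threshold.
print(ordered_neighbors([0.10, 0.80, 0.15, 0.01]))  # -> [1]
```

With several positive edges, e.g. confidences [0.6, 0.9, 0.2, 0.7], the same helper returns [1, 3, 0], i.e. the positive neighbors ranked by confidence.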
But I still have one more question.
Suppose that the ground truth of a vertex's neighbor list is [1, 3], and the matched, ordered, predicted list is [0, 3].
According to your slide, the AveP should be (0x1 + 1x0.5) x 0.5
My question is: why don't we calculate it as (0x1 + 1x1) x 0.5 instead?
The difference is: why is the precision at the second vertex 0.5 instead of 1? Why must we consider the failed prediction of the first vertex when we calculate the precision at the second vertex? I believe this is the reason we have to consider the order of the predicted list.
Thank you.
If we did not consider precision, the model could simply predict all nodes as neighbors and achieve a perfect score of 1. In that case the predicted result would be [0, 1, 3], and the score would be (0x1 + 1x1 + 1x1) x 0.5 = 1
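The precision-weighted AveP discussed above can be written out explicitly. This is a minimal sketch based on the slide's formula, not the repository's implementation: each correct prediction at rank k contributes (correct predictions so far) / k, and the sum is normalized by the number of ground-truth neighbors.

```python
def average_precision(ordered_pred, gt_neighbors):
    """AveP over an ordered predicted neighbor list: a correct
    prediction at rank k contributes hits/k; normalize by |GT|."""
    gt = set(gt_neighbors)
    hits = 0
    score = 0.0
    for rank, node in enumerate(ordered_pred, start=1):
        if node in gt:
            hits += 1
            score += hits / rank
    return score / len(gt) if gt else 0.0

# The example from this thread: GT neighbors {1, 3}, prediction [0, 3].
print(average_precision([0, 3], [1, 3]))     # (0 + 1/2) / 2 = 0.25

# Predicting every node as a neighbor no longer scores 1:
print(average_precision([0, 1, 3], [1, 3]))  # (1/2 + 2/3) / 2 ~= 0.583
```

Note how the wrong prediction at rank 1 drags down the precision of every later rank, which is exactly why the all-positive prediction is punished.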
A nice answer! Thank you for your patience.
> If we do not consider precision, the model can simply predict all nodes as the neighbor, and achieve a perfect score, 1. Such that the predicted results is [0, 1, 3], and the score would be ( 0x1 + 1x1 + 1x1 ) x 0.5 = 1
I have a question. If I predict the probabilities of connecting [0, 1, 3] as [0.9, 0.9, 0.9], the value computed by the metric is quite large, but node 0 is a wrongly connected edge. Is there no punishment for that?
Hi, I don't quite get your question. Could you provide a more detailed description?
Hi, thanks for the remarkable work! I am confused about the TOP calculation.