Closed ziruiw-dev closed 4 years ago
Hi @paulguerrero,
I think I understand it now. It trains with the rectified error but evaluates without rectification, right?
Best, Zirui
Hi Zirui, We do use the rectification for evaluation as well. It's right that 1.0 means the predicted curvature is off by an amount equal to the ground-truth curvature magnitude. Curvature prediction is inherently much less stable than normal prediction, so we do expect larger errors here. That does not mean the curvature is off by that amount everywhere: some regions in spiky high-curvature shapes add disproportionately to the error, like the star in the second row of Figure 10.
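To illustrate the point about spiky regions, here is a small numeric sketch with hypothetical per-point rectified errors (the numbers are made up, not from the paper): even when most points have small errors, a handful of large errors from high-curvature regions can push the mean square above 1.0.

```python
# Hypothetical per-point rectified errors: most points are accurate,
# but a few spiky high-curvature regions produce large errors.
errors = [0.1] * 95 + [5.0] * 5  # 95 good points, 5 bad ones

# Mean square rectified error over all points.
mse = sum(e * e for e in errors) / len(errors)
print(mse)  # (95 * 0.01 + 5 * 25.0) / 100 = 1.2595, above 1.0
```

So an average above 1.0 does not require every point to be badly estimated; a small fraction of outliers is enough.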
Hi @paulguerrero, Many thanks for this explanation! Best, Zirui
Hi @paulguerrero,
Thanks for releasing this work.
I have a question regarding the rectified curvature loss introduced in your paper:
If I understand correctly, that means it's very difficult to get a loss value larger than 1.0, unless the estimated curvature is so bad that the error between it and the ground truth is even larger than the ground-truth magnitude. For example, with estimated k = 11 and ground truth k = 5, the rectified error is (11 - 5) / 5 = 1.2.
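To make the arithmetic above concrete, here is a minimal sketch, assuming the rectified error has the form |k_pred - k_gt| / max(|k_gt|, 1); the exact normalization used in PCPNet may differ, so check the paper and code for the precise definition.

```python
def rectified_error(k_pred, k_gt):
    # Assumed form: absolute error normalized by the ground-truth
    # curvature magnitude, clamped to at least 1 so that near-flat
    # regions (tiny |k_gt|) do not blow up the error.
    return abs(k_pred - k_gt) / max(abs(k_gt), 1.0)

print(rectified_error(11.0, 5.0))  # (11 - 5) / 5 = 1.2
```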
If my understanding above is correct, I am confused about how the algorithms (pcpnet and jet) get an average mean square rectified error larger than 1.0 for the first principal curvature (Figure 7 in the pcpnet paper)? I guess I must be misunderstanding something, because both jet and pcpnet show values larger than 1.0 in Figure 7.
Best, Zirui