I just ran the code, and it seems the evaluation results are reported in a different metric from Table 4 of the ECCV 2022 paper. Could you give some insight on how to evaluate the results so they match Table 4 in the ECCV 2022 paper?
Could you please provide more details about which specific metric differs? As I recall, the evaluation script in the gSDF repo is more accurate. It also provides an implementation of AlignSDF, so you could try that one.