EATMustard opened this issue 1 year ago (status: Open)
Hello, thanks for your work! In the LightGlue paper, it is mentioned: "we first train it to predict correspondences and only after train the confidence classifier. The latter thus does not impact the accuracy at the final layer or the convergence of the training." However, in the actual code the two appear to be trained together, with no distinction between the two stages.

We figured out later that we could simplify the implementation by training the confidence classifier at the same time as the rest of the model, but detaching its inputs. This adds only a marginal computational overhead, and the end results are strictly identical.