Thank you for sharing your results. Here are some observations at first glance.
We compare only with TopicFM, because TopicFM+ uses a significantly higher number of OpenCV RANSAC iterations in its code (10k vs. the standard 1k in the other baselines), which greatly improves AUC but also substantially slows RANSAC down. Evaluating inference speed without considering accuracy isn't meaningful. Both settings are shown in the table below; see the sketch after it for where this iteration count enters the OpenCV API.
| MegaDepth | AUC@(5°, 10°, 20°) |
| --- | --- |
| LoFTR | 52.8 / 69.2 / 81.2 |
| TopicFM | 54.1 / 70.1 / 81.6 |
| TopicFM+ | 52.2 / 68.8 / 81.1 |
| Ours | 56.4 / 72.2 / 83.5 |
| Ours (Opt.) | 55.4 / 71.4 / 82.9 |
| TopicFM+ (10k) | 58.2 / 72.8 / 83.2 |
| Ours (10k) | 59.3 / 74.1 / 84.6 |
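For context, the iteration budget in question is the `maxIters` argument of OpenCV's robust estimators. A minimal sketch, assuming an OpenCV 4.5+ build that exposes `maxIters` and using placeholder keypoints:

```python
import cv2
import numpy as np

# Placeholder matched keypoints, shaped (N, 2) in pixel coordinates;
# in practice these come from a matcher such as EfficientLoFTR.
rng = np.random.default_rng(0)
mkpts0 = rng.random((500, 2)) * 256
mkpts1 = rng.random((500, 2)) * 256

# Standard budget used by most baselines: at most 1k RANSAC iterations.
F_1k, mask_1k = cv2.findFundamentalMat(
    mkpts0, mkpts1, cv2.FM_RANSAC,
    ransacReprojThreshold=0.5, confidence=0.999, maxIters=1000)

# The 10k setting used by TopicFM+: higher AUC, but the estimator may run
# up to 10x longer, which inflates end-to-end timing comparisons.
F_10k, mask_10k = cv2.findFundamentalMat(
    mkpts0, mkpts1, cv2.FM_RANSAC,
    ransacReprojThreshold=0.5, confidence=0.999, maxIters=10000)
```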
We will provide a Jupyter notebook demo showing how to use our model later. Please stay tuned!
Thanks for your advice. With `self.matcher = reparameter(self.matcher)`, inference time is improved a little.
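For anyone following along, here is roughly how the full setup looks (a sketch following the repo's README pattern; the checkpoint filename is an assumption, adjust to your download):

```python
from copy import deepcopy

import torch
from src.loftr import LoFTR, full_default_cfg, reparameter  # repo-local imports

# Build the matcher with the full (non-optimized) config and load the
# released outdoor weights (filename assumed).
config = deepcopy(full_default_cfg)
matcher = LoFTR(config=config)
matcher.load_state_dict(torch.load("weights/eloftr_outdoor.ckpt")["state_dict"])

# Fuse the multi-branch training-time blocks into single inference-time
# layers; this must happen after the weights are loaded.
matcher = reparameter(matcher)
matcher = matcher.eval().cuda()
```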
In your README you plan to add options for flash-attention and `torch.compile` for better performance. Are there other performance improvements expected?
Sorry for the late reply. Yes, there is also FP16 inference. We have already modified some of the code and added a Jupyter notebook demonstrating how to use FP16 inference (on modern GPUs) to accelerate our model. This gives even faster speeds than mixed precision, with almost no loss in accuracy.
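In outline, full-FP16 inference looks like this (a minimal sketch, not the notebook itself; the dictionary keys follow the LoFTR convention, and `matcher` is assumed to be set up as above):

```python
import torch

# Cast every weight to half precision; unlike torch.autocast mixed
# precision, the whole forward pass then runs in FP16.
matcher = matcher.half().eval().cuda()

# Grayscale pairs, shaped (1, 1, H, W) and normalized to [0, 1],
# cast to match the model's dtype.
img0 = torch.rand(1, 1, 256, 256, device="cuda").half()
img1 = torch.rand(1, 1, 256, 256, device="cuda").half()

batch = {"image0": img0, "image1": img1}
with torch.no_grad():
    matcher(batch)            # results are written into `batch` in-place
mkpts0 = batch["mkpts0_f"]    # matched keypoints in image0
mkpts1 = batch["mkpts1_f"]    # matched keypoints in image1
```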
Hi, thanks for open-sourcing the code and model weights. As I said in a previous post, I would like to use EfficientLoFTR for a comparative benchmark in our study.
I found strange results in my benchmark: at input size 1x1x256x256, EfficientLoFTR's inference time is close to 26 ms. This is better than LoFTR, which runs at ~40 ms at this resolution, but very close to TopicFM-fast when measured on my GeForce RTX 2070 Mobile GPU.
Since TopicFM-fast is not in your benchmark, I would like to know whether I made a mistake when using your code.
Here is my inference code:
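A minimal sketch of a timing harness of this kind (not the verbatim snippet; `matcher` is assumed to be set up as in the reparameterization example above):

```python
import time

import torch

# 1x1x256x256 grayscale input, as in the numbers quoted above.
batch = {
    "image0": torch.rand(1, 1, 256, 256, device="cuda"),
    "image1": torch.rand(1, 1, 256, 256, device="cuda"),
}

# Warm-up: the first calls pay CUDA context and kernel-selection costs
# and must not be timed. A shallow copy keeps the input dict clean,
# since the matcher writes its outputs into it.
with torch.no_grad():
    for _ in range(10):
        matcher(dict(batch))

# CUDA launches are asynchronous: synchronize before reading the clock,
# or the loop only measures kernel-launch overhead.
torch.cuda.synchronize()
t0 = time.perf_counter()
n_runs = 100
with torch.no_grad():
    for _ in range(n_runs):
        matcher(dict(batch))
torch.cuda.synchronize()
print(f"mean inference time: {(time.perf_counter() - t0) / n_runs * 1e3:.1f} ms")
```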
Here is my environment setup:
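The relevant version details can be captured with a snippet like this (illustrative; it only prints what a GPU benchmark report usually needs):

```python
import platform

import cv2
import torch

# The details that matter when comparing GPU inference timings.
print("python :", platform.python_version())
print("torch  :", torch.__version__)
print("cuda   :", torch.version.cuda)
print("cudnn  :", torch.backends.cudnn.version())
print("opencv :", cv2.__version__)
print("gpu    :", torch.cuda.get_device_name(0))
```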
Did I miss something that would make your code run more efficiently?
Best regards