Open guyrose3 opened 7 years ago
Hi @guyrose3, the evaluation protocol for the Brown dataset can be found in many papers. The ROC curve is obtained by varying the distance threshold; you can sample the thresholds densely to avoid interpolation. The patch size is 32, as reported in the paper.
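For reference, here is a minimal NumPy sketch of that protocol. It assumes `dists` holds the descriptor distances and `labels` the ground-truth match flags for the Brown test pairs; sorting the distances visits every possible threshold exactly, so no interpolation (or threshold grid) is needed:

```python
import numpy as np

def fpr_at_recall(dists, labels, recall_level=0.95):
    """FPR at a fixed recall, computed by sweeping the distance
    threshold over every observed distance (no interpolation)."""
    order = np.argsort(dists)                # ascending: closest pairs first
    labels = np.asarray(labels, dtype=bool)[order]

    tp = np.cumsum(labels)                   # positives accepted at each threshold
    fp = np.cumsum(~labels)                  # negatives accepted at each threshold

    recall = tp / labels.sum()
    fpr = fp / (~labels).sum()

    # first threshold at which the target recall is reached
    idx = np.searchsorted(recall, recall_level)
    return fpr[idx]

# usage sketch: dists = np.linalg.norm(desc1 - desc2, axis=1)
# print(fpr_at_recall(dists, labels))
```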
Hi @yuruntian, I think the difference is in the way TensorFlow implements its batch normalization layer compared to Caffe. Also, what results are you getting on the HPatches benchmark tasks, such as verification and image matching, when using their train/val split?
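One way to rule the batch norm layers in or out (just a sketch, assuming the moving statistics were exported along with the weights) is to apply the transform by hand, so the epsilon and the statistics in use are explicit. Note that Caffe defaults to eps=1e-5, while `tf.layers.batch_normalization` defaults to 1e-3, which alone can shift the descriptors:

```python
import tensorflow as tf

def bn_inference(x, moving_mean, moving_var, gamma, beta, eps):
    """Inference-mode batch norm with explicitly supplied statistics.
    Pass gamma=None, beta=None if the network's BN layers have no
    learned affine parameters."""
    return tf.nn.batch_normalization(x, mean=moving_mean,
                                     variance=moving_var,
                                     offset=beta, scale=gamma,
                                     variance_epsilon=eps)
```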
Hi @guyrose3, here are the early results on HPatches: http://www.iis.ee.ic.ac.uk/ComputerVision/DescrWorkshop/index.html
Hello, I've been researching L2-Net recently and have run into related problems. Did you solve the problem? Could you send me a copy of your TensorFlow code? It would be greatly appreciated. @guyrose3
Hi @yuruntian, I read your paper and found it very interesting. I'm trying to reproduce your results using TensorFlow; specifically, I'm taking the model trained on HPatches (with augmentation) and testing it on the Brown dataset. I ported the weights from MatConvNet into TensorFlow and followed the exact architecture. The TensorFlow descriptor works quite well on feature-matching tasks, so I'm guessing I plugged in the weights correctly. I also followed the Brown evaluation method and report FPR @ recall=0.95, but I'm getting quite different results: 20% FPR @ recall=0.95 on Liberty (vs. the 3.2% you reported in the paper). So I must be doing something wrong in the evaluation code. Can you share or point me to the evaluation code you were using? Also, can you elaborate on how you evaluated on the Brown dataset (patch size, any special tweaks, etc.)?
Thanks, Guy
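For anyone attempting the same port, a rough sketch of the weight-transfer step (the filename and field names below are made up for illustration). MatConvNet stores conv filters as H × W × C_in × C_out, which is the same layout TensorFlow's `conv2d` expects, so the arrays can usually be assigned without transposing:

```python
import numpy as np
import scipy.io
import tensorflow as tf

# 'l2net.mat' and the field names below are hypothetical.
params = scipy.io.loadmat('l2net.mat')

# MatConvNet conv filters are H x W x C_in x C_out -- the layout
# tf.nn.conv2d expects -- so no transpose is needed here
# (Caffe weights, by contrast, would need a (2, 3, 1, 0) transpose).
w1 = np.asarray(params['conv1_filters'], dtype=np.float32)
b1 = np.asarray(params['conv1_biases'], dtype=np.float32).ravel()

x = tf.placeholder(tf.float32, [None, 32, 32, 1])   # 32x32 grayscale patches
conv1 = tf.nn.bias_add(
    tf.nn.conv2d(x, tf.constant(w1), strides=[1, 1, 1, 1], padding='SAME'),
    tf.constant(b1))
```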