DagnyT / hardnet

Hardnet descriptor model - "Working hard to know your neighbor's margins: Local descriptor learning loss"
MIT License

Does HardNetClassicalHardNegMiningSiftInit perform worse than HardNetMultipleDatasets? #27

Closed yunyundong closed 6 years ago

yunyundong commented 6 years ago

When I run HardNetClassicalHardNegMiningSiftInit, its performance is much worse than HardNetMultipleDatasets. In theory, the performance of HardNetClassicalHardNegMiningSiftInit should not be worse than HardNetMultipleDatasets. I noticed some differences in the convolution layers: bias=False for HardNetMultipleDatasets, but bias=True for HardNetClassicalHardNegMiningSiftInit. Another difference is the learning rate; the two scripts differ by a factor of 100. Why does HardNetClassicalHardNegMiningSiftInit perform worse than HardNetMultipleDatasets? @DagnyT
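
For concreteness, the kind of configuration difference being asked about looks roughly like this (a minimal sketch, not the repository's code; channel counts and learning-rate values are placeholders):

```python
import torch.nn as nn
import torch.optim as optim

# HardNetMultipleDatasets-style block: convolution without a bias term
# (the BatchNorm that follows provides the shift).
block_no_bias = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(32, affine=False),
    nn.ReLU(),
)

# HardNetClassicalHardNegMiningSiftInit-style block: convolution with bias=True.
block_with_bias = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1, bias=True),
    nn.BatchNorm2d(32, affine=False),
    nn.ReLU(),
)

# The two training scripts also use learning rates that differ by a factor of 100
# (the concrete numbers below are illustrative, not the repo's defaults).
opt_multi = optim.SGD(block_no_bias.parameters(), lr=1.0)
opt_classical = optim.SGD(block_with_bias.parameters(), lr=0.01)
```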

yunyundong commented 6 years ago

notredame Test Epoch: 3 [81920/100000 (82%)]: : 49it [00:10, 9.61it/s] Test set: Accuracy(FPR95): 0.14348000

yosemite Test Epoch: 3 [81920/100000 (82%)]: : 49it [00:08, 6.05it/s] Test set: Accuracy(FPR95): 0.26712000

ducha-aiki commented 6 years ago

@yunyundong First, HardNetMultipleDatasets is trained on multiple datasets, while HardNetClassicalHardNegMiningSiftInit is trained on one. You should compare HardNetClassicalHardNegMiningSiftInit vs HardNet.py. Second, this was specifically addressed in the paper, https://arxiv.org/pdf/1705.10872.pdf, page 7 (see the figure attached in the original comment).

yunyundong commented 6 years ago

For HardNetMultipleDatasets, I can get an FPR95 of about 0.1%; is that normal? I suspect the test data is included in the training data.

ducha-aiki commented 6 years ago

Another difference is the learning rate; the two scripts differ by a factor of 100

Because we update hyperparameters (and the rest, such as porting to the newest PyTorch version) only for the scripts that are actually used. It looks like you are systematically running into obsolete scripts. The things that are up to date are in https://github.com/DagnyT/hardnet/blob/master/code/run_me.sh

For HardNetMultipleDatasets, I can get an FPR95 of about 0.1%; is that normal? I suspect the test data is included in the training data.

Yes, it is normal, and yes, the "test data are in the training data" here. That is because HardNetMultipleDatasets is not supposed to be evaluated on the Brown dataset, but on HPatches instead. Showing "FPR" here is just a sanity check, e.g. that the values are not suddenly NaNs.
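
For reference, FPR95 is the false positive rate at the descriptor-distance threshold that recalls 95% of the matching pairs. A minimal sanity-check sketch of that metric (not the repository's evaluation code; names and toy values are illustrative):

```python
import numpy as np

def fpr_at_95_recall(distances, labels):
    """FPR95: fraction of non-matching pairs accepted at the distance
    threshold that accepts 95% of matching pairs.
    labels: 1 for matching pairs, 0 for non-matching pairs."""
    distances = np.asarray(distances, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Distance threshold that recalls 95% of the positive (matching) pairs.
    threshold = np.percentile(distances[labels == 1], 95)
    # False positive rate: negatives that fall at or below that threshold.
    return float(np.mean(distances[labels == 0] <= threshold))

# Toy check: well-separated positive/negative distances give FPR95 == 0.
d = np.array([0.1, 0.2, 0.3, 0.9, 1.0, 1.1])
y = np.array([1, 1, 1, 0, 0, 0])
print(fpr_at_95_recall(d, y))  # 0.0
```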

yunyundong commented 6 years ago

When I download the file (https://github.com/DagnyT/hardnet/blob/master/code/run_me.sh) on my Ubuntu machine, then chmod +x run_me.sh and run it, an error occurs:

./run_me.sh: line 7: syntax error near unexpected token `newline'
./run_me.sh: line 7: `<!DOCTYPE html>'

ducha-aiki commented 6 years ago

@yunyundong run_me.sh should be run from the repo root dir. I have fixed README.md to use ./code/run_me.sh instead of ./run_me.sh.

Thanks for the catch!
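
A minimal sketch of the workflow this implies, assuming a standard git clone (the `<!DOCTYPE html>` in the error suggests the GitHub HTML page was saved instead of the raw script, which cloning avoids):

```bash
# Clone the repository and run the script from the repo root.
# The repo URL and the code/run_me.sh path are from the thread; the rest is the usual workflow.
git clone https://github.com/DagnyT/hardnet.git
cd hardnet
chmod +x code/run_me.sh
./code/run_me.sh
```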