About Table 2, I would like to ask a few questions.
OriNet by itself is actually not that interesting to me, because its results are similar to Yi et al., 2015. You can see from the bottom part of Table 1 that all the variants of OriNet perform essentially the same.
It is used for the "separately" column of Table 2. "Separately" means that I trained OriNet with https://github.com/ducha-aiki/affnet/blob/master/train_OriNet_test_on_graffity.py and AffNet with https://github.com/ducha-aiki/affnet/blob/master/train_AffNet_test_on_graffity.py, and combined them only in the full pipeline (detection - affine - orientation - descriptor) at test time.
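For concreteness, here is a hypothetical sketch of that test-time composition. None of the names below (`detector`, `affnet`, `orinet`, `descriptor`, `patch_fn`) are the affnet repo's actual API; they stand in for any components with the assumed interfaces listed in the docstring.

```python
# Hypothetical sketch of the test-time composition described above; the
# callables are placeholders, not the repo's real modules or signatures.
import torch

def describe_image(img, detector, affnet, orinet, descriptor, patch_fn):
    """img: (1, 1, H, W) tensor. Returns one descriptor per detected keypoint.

    Assumed interfaces (illustrative only):
      detector(img)       -> (N, 2, 3) local affine frames (LAFs)
      affnet(patches)     -> (N, 2, 2) affine shape estimates
      orinet(patches)     -> (N,) orientation angles in radians
      descriptor(patches) -> (N, D) descriptors
      patch_fn(img, lafs) -> (N, 1, 32, 32) patches sampled from the LAFs
    """
    # 1. detection: keypoints come out as local affine frames
    lafs = detector(img)

    # 2. affine: estimate shape on the current patches and fold it into the LAFs
    A = affnet(patch_fn(img, lafs))
    lafs = torch.cat([torch.bmm(A, lafs[:, :, :2]), lafs[:, :, 2:]], dim=2)

    # 3. orientation: estimate an angle on the affine-rectified patches
    ang = orinet(patch_fn(img, lafs))
    c, s = torch.cos(ang), torch.sin(ang)
    R = torch.stack([torch.stack([c, -s], dim=1),
                     torch.stack([s, c], dim=1)], dim=1)
    lafs = torch.cat([torch.bmm(lafs[:, :, :2], R), lafs[:, :, 2:]], dim=2)

    # 4. description: describe the fully geometrically normalized patches
    return descriptor(patch_fn(img, lafs))
```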
In setting (2), why didn't you compare with the biases initialized to 0?
Because it failed in (1), so I thought it was not worth it. But you are probably right, and I should have tried to train it anyway.
But the parameters in settings (1, 2) include orientation information. How can you train "A" and keep the orientation unchanged?
By applying the transformation at https://github.com/ducha-aiki/affnet/blob/master/LAF.py#L279. It cancels out the rotation.
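For illustration, here is a minimal sketch of one standard way to cancel the rotation of a 2x2 affine frame; it is not necessarily the exact code behind that link. Since A and A*R (with R a rotation) define the same ellipse A A^T, replacing A by the Cholesky factor of A A^T keeps the shape but forces a rotation-free, lower-triangular form.

```python
# Sketch: cancel the rotation of a batch of 2x2 affine frames by keeping
# only the Cholesky factor of A A^T (same ellipse, no rotation component).
# This is one common formulation, not necessarily the repo's exact code.
import math
import torch

def cancel_rotation(A: torch.Tensor) -> torch.Tensor:
    """A: (N, 2, 2) affine matrices. Returns rotation-free (N, 2, 2) matrices
    describing the same measurement ellipses."""
    E = torch.bmm(A, A.transpose(1, 2))   # ellipse matrices A A^T
    return torch.linalg.cholesky(E)       # lower triangular, positive diagonal

# Example: a pure rotation maps to (approximately) the identity.
c, s = math.cos(0.3), math.sin(0.3)
R = torch.tensor([[[c, -s], [s, c]]])
print(cancel_rotation(R))
```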
Why didn't you talk about OriNet in the paper? How did you use it in Tables 1 and 2?