Open NiuKeke opened 5 years ago
@NiuKeke Hello, friend, could you share your code which is modified from the original code? I think the features extracted from the smallest octave are not correct. Thanks.
Of course, I'd be happy to share it with everybody. It did work for me, but I don't know whether the code has bugs or not. How can I give it to you?
@NiuKeke Can you send it to my e-mail? I would test it on my dataset and give you feedback. My e-mail: perrywu1989@163.com. Thanks.
OK, I will send it. Wait a moment.
@NiuKeke Could you send me your code which is modified from the original code? My email: 756589738@qq.com. Thanks.
The reason is to make it as scale-invariant as possible. From the theory, the gradients for the descriptor should be computed at a scale that corresponds to the scale at which the feature is detected. It doesn't have to be the very same scale, but a scale related by some constant factor. I don't do that exactly. I use the first scale of the same octave within which the feature is detected. The first reason is speed and total memory consumption. The second reason is that if you use the scale at which the feature is detected, the image is unnecessarily blurred. I don't mind anyone testing different alternatives and could add a switch in the code to control that.
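The choice described above — taking the descriptor gradients from the first scale of the feature's octave, rather than from the exact detection scale — can be sketched roughly as follows. This is a simplified illustration, not CudaSift's actual code; the function names and the `base_sigma` default are assumptions for the example.

```python
import math

def octave_of(sigma, base_sigma=1.6):
    """Octave index for a feature detected at blur level `sigma`.

    Each octave doubles the blur, so the index is floor(log2(sigma / base_sigma)).
    """
    return max(0, int(math.floor(math.log2(sigma / base_sigma))))

def descriptor_sigma(sigma, base_sigma=1.6):
    """Scale used for the descriptor gradients: the first scale of the
    feature's octave (base_sigma * 2**octave), not `sigma` itself.

    The two differ by at most a constant factor below 2, so approximate
    scale invariance is preserved, while the image used for the descriptor
    is less blurred than the detection scale and no extra blurred copies
    need to be kept in memory.
    """
    return base_sigma * 2.0 ** octave_of(sigma, base_sigma)

# Example: a feature detected at sigma = 4.5 falls in octave 1
# (blur range [3.2, 6.4)), so its descriptor gradients come from
# the image blurred to sigma = 3.2, the first scale of that octave.
```

The point of the sketch is only that the descriptor scale stays within a bounded constant factor of the detection scale, which is the property the author relies on.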
@NiuKeke Are you discussing the ExtractSiftOctave function, which currently uses a constant scale to extract features? Could you please share your code by committing it to GitHub or making a pull request? My email is fluke.l@gmail.com — could you share the code with me? Thank you.
No, the original code extracts the features from the octave with the smallest resolution instead of the octave at the original resolution, and I don't understand why. I want to know the author's reasoning. I will send the code to your email.
Thank you very much. I'm reading this discussion: https://github.com/Celebrandil/CudaSift/issues/52
OK, maybe I have understood what you said, but couldn't I achieve the same by extracting the features from the octave at the original resolution? Or have I misunderstood you?
To me, it doesn't seem like a good idea to use the original resolution for the descriptor. Large features, detected at a coarse scale, will then contain high-frequency components that will not be there for the same feature in another image, if it's detected at a finer scale, due to an increasing distance. If you don't have large scale variations though, you might benefit from these high-frequency components, making the images crisper and descriptors more discriminative.
Yeah, I agree with what you said. But sorry, I don't understand whether it is right that the final features are extracted from the octaves at all resolutions except the original resolution.
@NiuKeke @heyfluke Could you send me your code which is modified from the original code? My email: himmat2k2013@gmail.com. Thanks.
Hello, why did you extract the features from the octave with the smallest resolution instead of the original resolution? I tried changing it to extract the features from the original resolution, and it did work. What was your reasoning?