Closed zy871125746 closed 5 years ago
Hi there —
The non-match sampling we are doing is the same as what we understood is described in Schmidt et al. 2017: for each pixel in image A that has a match in image B, we sample non-matches in image B.
One thing we investigated was whether this was an issue, since opposite sides of the object wouldn't end up sampled as non-matches. If you think about it, when two views look at opposite sides of the object, many pixels won't have matches, and so those pixels won't have any non-matches sampled for them.
The "blind non-matches" are accordingly non-match samples in image B for the pixels in image A that do *not* have a match in image B.
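To make the distinction concrete, here is a minimal sketch of the two sampling modes. All names (`sample_non_matches`, `sample_blind_non_matches`) are hypothetical helpers for illustration, not the repo's actual API, and a real implementation would also reject samples that land on or near the true match:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_non_matches(matched_a_indices, num_pixels_b, samples_per_pixel):
    """Standard non-match sampling: for each image-A pixel that HAS a
    match in image B, draw random image-B pixels as non-matches.
    Returns an array of shape (len(matched_a_indices), samples_per_pixel)
    of flat pixel indices into image B."""
    return rng.integers(0, num_pixels_b,
                        size=(len(matched_a_indices), samples_per_pixel))

def sample_blind_non_matches(unmatched_a_indices, num_pixels_b, samples_per_pixel):
    """Blind non-match sampling: for image-A pixels with NO match in
    image B (e.g. the occluded side of the object), still draw random
    image-B pixels as non-matches."""
    return rng.integers(0, num_pixels_b,
                        size=(len(unmatched_a_indices), samples_per_pixel))
```

The only difference is which image-A pixels get non-matches: the standard version covers only matched pixels, while the blind version covers the rest, so that descriptors on unmatched regions are still pushed apart from image B.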
After experimentation, though, we didn't observe any significant difference in quantitative metrics when using blind non-match sampling. And yes, this isn't mentioned in the paper; it's a fairly minor note compared to the other experiments described there.
Does that make sense?
On Sat, Jan 5, 2019 at 1:08 AM HugeKangaroo notifications@github.com wrote:
Could you explain what the blind_non_match_loss is? I cannot find this loss in the paper.
Sorry for the late reply. After your explanation, I figured out how the "blind non-matches" work. Thanks a lot!
:+1: