binhmuc opened this issue 5 years ago
Also, your paper says that "A keypoint is considered to be matched correctly if its predicted location is within a distance of α · max(h, w) of the target keypoint position". So I don't understand why your code compares against the source points instead of the target points.
Hi, this is just a matter of terminology. For consistency with the ProposalFlow paper, which uses inverse warping, we also transform the points from the second image (target) to the first image (source). In the paper we explain it the opposite way because it is more natural. But remember, "source" and "target" are just two names. Sorry for the confusion.
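For what it's worth, here is a minimal sketch of the PCK criterion from the paper (the function name and the default α = 0.1 are my own, so the repo's evaluation code may differ in details). The point of the terminology remark above is that the metric comes out the same whichever direction you warp, as long as the predicted and annotated points are compared in the same image frame:

```python
import torch

def pck(predicted_pts, gt_pts, h, w, alpha=0.1):
    """Percentage of Correct Keypoints.

    predicted_pts, gt_pts: tensors of shape (N, 2) with (x, y)
    coordinates in the same image frame. A keypoint counts as correct
    if it lands within alpha * max(h, w) of its ground-truth position.
    """
    dists = torch.norm(predicted_pts - gt_pts, dim=1)  # Euclidean distance per keypoint
    threshold = alpha * max(h, w)                      # tolerance scaled by image size
    return (dists <= threshold).float().mean().item()
```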
Thanks for your reply :) So does it mean that if I just swap "source points" and "target points" in the code, I get the natural result? But that seems strange to me, because in the code you warp the source image into the target image and use the resulting theta for inverse warping... So could you tell me how to get the target points from the source points and the estimated theta? Thank you!
Please see the explanations about inverse warping here:
https://www.cs.unc.edu/~lazebnik/research/fall08/lec08_faces.pdf
this should help you understand!
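To make the inverse-warping convention concrete, here is a rough sketch (the names are mine; I assume theta is a 2x3 affine matrix that maps target coordinates to source coordinates, which is the inverse-warping convention). Applying theta directly sends target points into the source frame; going from source points to target points requires inverting the affine:

```python
import torch

def warp_points(theta, pts):
    """Apply a 2x3 affine [A | t] to (N, 2) points: p' = A @ p + t."""
    A, t = theta[:, :2], theta[:, 2]
    return pts @ A.t() + t

def invert_affine(theta):
    """Invert [A | t]: the inverse affine is [A^-1 | -A^-1 t]."""
    A, t = theta[:, :2], theta[:, 2]
    A_inv = torch.inverse(A)
    return torch.cat([A_inv, (-A_inv @ t).unsqueeze(1)], dim=1)

# theta maps target -> source (inverse warping), so:
# source_pts = warp_points(theta, target_pts)                 # what the eval code does
# target_pts = warp_points(invert_affine(theta), source_pts)  # the "natural" direction
```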
So do you understand what he means? I'm also confused by this.
@lixiaolusunshine Yes, I understood him. Clearly, the paper says the mapping goes from source points to target points, but in the source code it is exactly the inverse.
So in his paper he gets the estimated inverse affine parameters from the feature regression layer, then uses this inverse mapping to warp the source image into the target image?
@lixiaolusunshine Sorry, I can't follow your meaning. In his paper it is very clear: run the geometric matching network, estimate a set of parameters, then parameters => warp => loss. The only difference is how he compares the result. In the paper he compares against the target points, but in the code we never compute target points from the parameters, only source points.
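As I read the paper, the "parameters => warp => loss" step is the transformed grid loss: both the predicted and the ground-truth parameters warp a fixed grid of points, and the loss is the mean squared distance between the two warped grids. A rough sketch under that assumption (the names are mine):

```python
import torch

def grid_loss(theta_pred, theta_gt, n=20):
    """Mean squared distance between a fixed grid of points warped by
    the predicted and by the ground-truth 2x3 affine parameters."""
    # Uniform grid of n*n points in [-1, 1] x [-1, 1]
    grid = torch.cartesian_prod(torch.linspace(-1, 1, n),
                                torch.linspace(-1, 1, n))      # (n*n, 2)
    warp = lambda th, p: p @ th[:, :2].t() + th[:, 2]          # p' = A p + t
    return ((warp(theta_pred, grid) - warp(theta_gt, grid)) ** 2).sum(1).mean()
```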
@binhmuc Thanks for your issue.
The link about the inverse warping method is broken. If you know how to compute the inverse mapping for the points, I would like to know. The owner of this repository does not appear to be replying at this time.
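Since the linked slides seem to be offline now, the identity you need is just the inverse of an affine map. If theta = [A | t] maps target points to source points (the inverse-warping convention), then:

```
p_src = A · p_tgt + t          (theta = [A | t], target -> source)
p_tgt = A⁻¹ · (p_src − t)      (inverse: theta⁻¹ = [A⁻¹ | −A⁻¹ t])
```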
Thanks for your source code! Could you tell me how to get the target points from the pretrained model and the source points? I looked into "eval_pf.py", but it looks like you get the source points from the target points...
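In case it helps, here is a rough sketch of the direction you are asking about (the function names and the model's input/output format are my assumptions, not the repo's exact API): run the pretrained network to get theta, then invert the estimated affine to map the annotated source points into the target image:

```python
import torch

def predict_target_points(model, source_image, target_image, source_pts):
    """Map annotated source keypoints into the target image.

    Assumes the network regresses the inverse-warping theta as a
    2x3 affine (target coords -> source coords), so going from
    source to target means inverting the estimated affine.
    """
    with torch.no_grad():
        # Assumed input format; check the repo's actual forward() signature.
        theta = model({'source_image': source_image,
                       'target_image': target_image})
    A, t = theta[:, :2], theta[:, 2]
    A_inv = torch.inverse(A)
    return (source_pts - t) @ A_inv.t()   # p_tgt = A^-1 (p_src - t)
```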