ignacio-rocco / cnngeometric_pytorch

CNNGeometric PyTorch implementation
MIT License

How to get predicted target point from result and source point #11

Open · binhmuc opened this issue 5 years ago

binhmuc commented 5 years ago

Thanks for your source code! Could you tell me how to get the target point from the pretrained model and the source point? I looked at "eval_pf.py", but it looks like you get the source point from the target point...

binhmuc commented 5 years ago

Also, your paper says that "A keypoint is considered to be matched correctly if its predicted location is within a distance of α · max(h, w) of the target keypoint position". So I don't understand why your code compares against source points instead of target points.

ignacio-rocco commented 5 years ago

Hi, this is just a matter of terminology. For consistency with the ProposalFlow paper, which uses inverse warping, we also transform the points from the second image (target) to the first image (source). In the paper we explain it the opposite way because it's more natural. But remember, "source" and "target" are just two names. Sorry for the confusion.
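Concretely, for the affine model this means the estimated theta is applied to points expressed in the target image to bring them into the source image, which is the space where the PCK distance is measured. A minimal sketch in plain PyTorch (only an illustration, not the exact point-transform code in the repo; the flattened theta layout, the (batch, 2, N) point layout and normalized coordinates are assumptions):

```python
import torch

def aff_transform_points(theta, points):
    # theta:  (batch, 6) flattened 2x3 affine [a11, a12, tx, a21, a22, ty]
    # points: (batch, 2, N), row 0 = x, row 1 = y, normalized coordinates
    theta = theta.view(-1, 2, 3)
    A = theta[:, :, :2]              # 2x2 linear part
    t = theta[:, :, 2].unsqueeze(2)  # 2x1 translation
    return torch.bmm(A, points) + t  # (batch, 2, N)

# an identity theta leaves the points unchanged
theta = torch.tensor([[1., 0., 0., 0., 1., 0.]])
pts = torch.tensor([[[0.5, -0.2], [0.1, 0.3]]])  # two points
print(aff_transform_points(theta, pts))
```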

binhmuc commented 5 years ago

Thanks for your reply :) So it means that I can just swap "source points" and "target points" in the code and get the natural result? But that seems too weird to me... Because in the code you warp source images -> target images, and use the resulting theta for inverse warping... So, could you tell me how to get the target points from the source points and the estimated theta? Thank you!

ignacio-rocco commented 5 years ago

Please see the explanations about inverse warping here:

https://www.cs.unc.edu/~lazebnik/research/fall08/lec08_faces.pdf

this should help you understand!
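In case the slides are unclear, the core idea can be shown in a toy PyTorch snippet (just an illustration, not the repo's warping class): F.affine_grid builds, for every pixel of the output image, the coordinate at which to sample the input image, so the theta passed to it maps output (target) coordinates back into the source image. That is why the same theta is also the natural transform for sending target keypoints into source-image coordinates.

```python
import torch
import torch.nn.functional as F

# theta maps output-grid (target) coordinates to input (source) coordinates
theta = torch.tensor([[[1.0, 0.0, 0.2],
                       [0.0, 1.0, 0.0]]])    # sample 0.2 further to the right
source = torch.rand(1, 3, 240, 240)          # stand-in for the source image
grid = F.affine_grid(theta, size=(1, 3, 240, 240), align_corners=False)
warped = F.grid_sample(source, grid, align_corners=False)
# 'warped' is the source image rendered in the target frame: inverse warping
```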

lixiaolusunshine commented 5 years ago

So do you understand what he means? I'm also confused by this.

binhmuc commented 5 years ago

@lixiaolusunshine yes, I understood him. To be clear: the paper describes going from source points to target points, but in the source code it's the other way around.

lixiaolusunshine commented 5 years ago

So in his paper he gets the estimated inverse affine parameters from the feature regression layer, and then uses this inverse mapping to warp the source image into the target image?

binhmuc commented 5 years ago

@lixiaolusunshine sorry, I can't quite follow what you mean. In his paper it's very clear: use the GMM, estimate a set of parameters, then parameters => warp => loss. The only difference is in how he compares the result. In the paper he compares against the target points, but in the code we never compute target points from the parameters, only source points.
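To spell out what the code's comparison amounts to: the annotated target keypoints are mapped through theta into source-image coordinates (as in the point-transform sketch above) and then checked against the annotated source keypoints with the α · max(h, w) threshold from the paper. A rough sketch, assuming the points are already in pixel coordinates of the source image:

```python
import torch

def pck(warped_tgt_pts, src_pts, h, w, alpha=0.1):
    # warped_tgt_pts: target keypoints mapped through theta into the source
    #                 image, (batch, 2, N), pixel coordinates of the source image
    # src_pts:        annotated source keypoints, same shape and frame
    # h, w:           source image size used for the alpha * max(h, w) threshold
    dist = torch.linalg.norm(warped_tgt_pts - src_pts, dim=1)  # (batch, N)
    return (dist <= alpha * max(h, w)).float().mean()
```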

tkrtr commented 9 months ago

@binhmuc Thanks for your issue.

The link about the inverse warping method is broken. If you know how to do the inverse point transform, I would like to know. The owner of this source code does not appear to be replying at this time.

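Since the linked slides are gone: for the affine model, one possible way to get the predicted target point from a source point is to invert the estimated 2x3 transform and apply the inverse to the source point. This is only a sketch under that assumption (the flattened theta layout is assumed, and the TPS model has no closed-form inverse, so there you would have to estimate the transform in the opposite direction instead):

```python
import torch

def invert_affine(theta):
    # invert a batch of flattened 2x3 affine transforms
    theta = theta.view(-1, 2, 3)
    A = theta[:, :, :2]
    t = theta[:, :, 2:]
    A_inv = torch.inverse(A)
    return torch.cat((A_inv, -torch.bmm(A_inv, t)), dim=2)   # (batch, 2, 3)

def transform_points(theta_2x3, points):
    # points: (batch, 2, N), same coordinate frame as theta
    return torch.bmm(theta_2x3[:, :, :2], points) + theta_2x3[:, :, 2:]

# theta maps target coords -> source coords, so its inverse maps a source
# keypoint to its predicted location in the target image
theta = torch.tensor([[1.2, 0.1, 0.05, -0.1, 0.9, -0.02]])
src_pts = torch.tensor([[[0.0, 0.5], [0.0, -0.3]]])          # (1, 2, 2)
tgt_pts_pred = transform_points(invert_affine(theta), src_pts)
print(tgt_pts_pred)
```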