It is not a single image when you are training; I used the CASIA-WebFace dataset for training. But this model is different from regular classification models. When you give a face image to the model, it outputs 512 numbers, which means 512 features. At test time, I give an image of, let's say, Elon Musk; the model gives 512 features and I save them to my database. When I give a second image of Elon Musk to the model, the features will be really close, since it is the same person. This is what single shot means.
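Roughly, the testing step looks like the sketch below. This is not the exact code from this repo; the model file, image paths, input size, normalization, and distance threshold are all placeholders just to show the enroll-then-compare idea:

```python
import numpy as np
import tensorflow as tf

# Hypothetical embedding model: takes a face image and returns a 512-d feature
# vector. In this repo the real model is trained with triplet loss on
# CASIA-WebFace; here it is simply loaded from a placeholder file.
embedding_model = tf.keras.models.load_model("face_embedding_model.h5")

def get_embedding(image_path: str) -> np.ndarray:
    """Load a face image, preprocess it, and return its 512-d embedding."""
    img = tf.io.decode_image(tf.io.read_file(image_path), channels=3)
    img = tf.image.resize(img, (112, 112))            # input size is an assumption
    img = (tf.cast(img, tf.float32) - 127.5) / 128.0  # common face-embedding scaling
    emb = embedding_model(tf.expand_dims(img, 0))[0].numpy()
    return emb / np.linalg.norm(emb)                  # L2-normalize for comparison

# "Enroll" one reference image per person in a tiny in-memory database.
database = {"elon_musk": get_embedding("elon_musk_1.jpg")}

# Verify a second image: embeddings of the same person should be close.
query = get_embedding("elon_musk_2.jpg")
distance = np.linalg.norm(database["elon_musk"] - query)
print("same person" if distance < 1.0 else "different person")  # illustrative threshold
```

The point is that training needs many images, but recognition afterwards only needs one stored embedding per person to compare against.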
Thank you
You're welcome
How is that possible when you say a single image per person is sufficient to train the algorithm?