Does it mean I have to train every time a new signature comes into the picture?
It is writer independent, so you don't have to re-train it. We use the same threshold for each writer. I've tested it both on my signatures and on some signatures from family members, and it does a good job.

There could be many reasons why it's not working in your case. You might have more variability in your signatures than I do, or it could just be the way the image was taken (it could be zoomed in too much, or too little). Try increasing the threshold (d, used in main.py) a bit if you want to accommodate sloppier signatures. I would also like to see the signatures you've verified on (try signing a random name and ask someone to forge it; don't upload your own signature to the internet). Maybe you've forged them too well, maybe they are sloppy; I can't know, since I can't reproduce this with my own local signatures.

Also make sure you're using the code from the current master branch. There was a bug present some 3-4 commits back, where I wasn't converting the file object to a PIL image properly. It's been resolved now. Try running a git pull to see if you're up to date with the master branch.
What I'm actually doing is this: I have an initial signature pair to compare. I pass each grayscale signature image to the convert-to-image-tensor function in your code, then I pass both of the outputs to a forward pass function, and then I check the distance between the two output vectors to get the final distance. Is this the correct way?
Btw, I'm extracting signature images from cheques and then comparing them with a signature in my database. Both of the images are from different sources.
Yes, that flow is correct: you call invert_image and convert_to_image_tensor, then feed the two inputs into the model and check whether the distance is less than the threshold; if it is, the signatures are genuine, otherwise forged.
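For reference, here is a minimal sketch of that flow. It assumes the preprocessing helpers live in Preprocessing.py and that the model's forward pass takes both tensors and returns one embedding per image; the module, class, and argument names are assumptions, so treat this as illustrative rather than the repo's exact API.

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Names below are assumptions based on this thread; check the repo for the
# actual module/class names and function signatures.
from Preprocessing import invert_image, convert_to_image_tensor
from Model import SiameseConvNet  # hypothetical class name

d = 0.145139  # threshold reported from Test.py later in this thread

def verify_pair(path_a, path_b, model):
    # Grayscale -> invert -> tensor, as described above
    t_a = convert_to_image_tensor(invert_image(Image.open(path_a).convert('L')))
    t_b = convert_to_image_tensor(invert_image(Image.open(path_b).convert('L')))

    with torch.no_grad():
        # Forward pass; assumed to return one embedding per input
        emb_a, emb_b = model(t_a, t_b)

    # Euclidean distance between the two output vectors
    distance = F.pairwise_distance(emb_a, emb_b).item()
    return distance, distance < d  # True -> treated as genuine
```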
The images don't have to be the same dimensions, right? Any other restrictions on the signature image?
Extracting from cheques might be the issue. The model was not trained on cheques, so even the line above which you sign can throw the model off. You'll have to make sure you extract only the signature, which is quite difficult. Try using the show inverted function in Preprocessing.py to view both of the preprocessed signatures. If only the signatures are getting extracted, then it might be an issue with the model; otherwise you'll need to retrain the model to be able to ignore the line under the signature and other artifacts, and you'll need a dataset of cheque images specifically.
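If the show inverted helper's exact name or signature differs in your checkout, a minimal sketch like the following can serve the same purpose of inspecting what the model actually sees. It assumes invert_image accepts a grayscale PIL image and returns something matplotlib can display; both are assumptions, not the repo's documented behaviour.

```python
import matplotlib.pyplot as plt
from PIL import Image

# Assumed import; the thread says the inversion helper lives in Preprocessing.py.
from Preprocessing import invert_image

def show_preprocessed(path):
    # Load as grayscale and invert, so the signature strokes are bright on black
    inverted = invert_image(Image.open(path).convert('L'))
    plt.imshow(inverted, cmap='gray')
    plt.title(path)
    plt.axis('off')
    plt.show()

# Inspect both the cheque crop and the reference signature from the database
show_preprocessed('extracted_from_cheque.png')
show_preprocessed('reference_from_database.png')
```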
No, they don't have to be the same dimensions; we resize them anyway. But make sure you've cropped them so that the signature has roughly 15% margins on all sides (basically, the majority of the image should be the signature). They should look like the images in the dataset.
Here are some of the signatures I've compared:

- [image: ext_sig.jpg] extracted signature, which is forged, giving a distance of 0.2230267, again less than the threshold of 0.3
- [image: forged.jpg] forged signature, but giving a distance of 0.2501182, which is less than the threshold
- [image: 230995329781824.jpg] original signature
- [image: data_base.jpg] original signature
I can't view your images; you haven't uploaded them properly. Please use the GitHub forum for adding images (instead of your email client). You can just drag and drop, but make sure you can see the image under preview.

With that said, the threshold I got from Test.py was 0.145139. I'm not sure how you got a threshold of 0.3. Did you increase it because I told you? I think 0.3 is way too much; try using 0.145139 itself, and if that doesn't work, try 0.17. Anything above that I would not recommend.
Yeah, I decreased it to 0.2 and I'm getting better results with some pairs. I guess the cropping of the signatures matters. I'm using connected components for cropping signatures, so maybe sometimes the 15 percent rule you mentioned isn't satisfied.
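For context, a connected-components crop plus margin padding along those lines might look like the sketch below. This is not the code used in the thread; it is a minimal OpenCV illustration of cropping the signature's bounding box and then padding it so the ink keeps roughly 15% margins on each side, with the noise-size cutoff picked arbitrarily.

```python
import cv2
import numpy as np

def crop_signature_with_margin(path, margin_frac=0.15):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Binarize: ink becomes white (255) on a black background
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Label connected components; drop the background (label 0) and tiny specks
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    keep = np.zeros_like(binary)
    for label in range(1, num_labels):
        if stats[label, cv2.CC_STAT_AREA] >= 50:  # arbitrary noise cutoff
            keep[labels == label] = 255

    ys, xs = np.where(keep > 0)
    if len(xs) == 0:
        return gray  # nothing detected, return the original image

    # Tight bounding box around the kept ink pixels
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    crop = gray[y0:y1 + 1, x0:x1 + 1]

    # Pad with white so the signature keeps ~15% margins on all sides
    pad_y = int(crop.shape[0] * margin_frac)
    pad_x = int(crop.shape[1] * margin_frac)
    return cv2.copyMakeBorder(crop, pad_y, pad_y, pad_x, pad_x,
                              cv2.BORDER_CONSTANT, value=255)
```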
The 15% thing wasn't really a metric I computed or anything; I just meant that the signature should be large enough, with some margin. It's nice that you're getting better results, at least on some pairs. Even on the dataset this model was trained on, I was only able to get 78%. I'll try getting hold of someone's GPU and training the full SigNet model (the authors claim an accuracy of 100% on the CEDAR dataset, which is questionable, but I don't know). I'll also try your method of cropping with the connected components algorithm for preprocessing and retrain the model.
You can have a look at the official repo for the paper: https://github.com/sounakdey/SigNet. I don't think they maintain it, though, and there are probably no pretrained models available.
Ok, thanks for the info and the repository. Cheers
I've tested this network on my signatures and forged versions of those signatures. It's not giving good results.