MuruganR96 opened this issue 4 years ago
@MuruganR96 if the data distribution changes too much, these models will probably not work very well, so you would be better off collecting data and re-training / fine-tuning the models. If you have a few samples per user, you can try the "meta-learning" approach, as it does some fine-tuning for the users.
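A minimal sketch of what the fine-tuning route could look like in PyTorch. The model-loading pattern follows this repo's example.py (assuming the model's forward returns the 2048-d features); the new classification head, the stand-in data, and the hyperparameters below are all placeholders, not a recommended training setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sigver.featurelearning.models import SigNet

# Load the pretrained feature extractor (as in example.py)
state_dict, _, _ = torch.load('models/signet.pth')
model = SigNet()
model.load_state_dict(state_dict)
model.train()

# New classification head for the new users (sizes are placeholders)
num_new_users = 50
head = nn.Linear(2048, num_new_users)  # SigNet features are 2048-d

# Dummy stand-in data: replace with your preprocessed 150x220 grayscale signatures
images = torch.randn(64, 1, 150, 220)
labels = torch.randint(0, num_new_users, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

optimizer = torch.optim.SGD(
    list(model.parameters()) + list(head.parameters()), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # placeholder number of epochs
    for x, y in loader:
        logits = head(model(x))  # features -> per-user logits
        loss = loss_fn(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```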
The datasets used to train these models were collected in a "lab" setting (you can read about it in the papers that introduced them): people wrote signatures on a clean sheet of paper (no background), usually in a single session (less variability). Unfortunately, we do not have datasets with "real-world" conditions available for academic research.
@luizgh, @atinesh-s, I guess sigver does not currently have a re-training / fine-tuning option.
I will train from scratch with more datasets.
I have a doubt, please correct me:
- 10,000 users: GPDSsyntheticOffLineSignature CORPUS
- 4,000 users: GPDSsyntheticOnOffLineSignature CORPUS
- 75 users: MCYT-75 OFFLINE SIGNATURE CORPUS
@luizgh, @atinesh-s, how many users can I take from these to train on?
I think you can try Euclidean distance.
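For example (a minimal sketch, assuming the 2048-d feature vectors were already extracted with the pretrained model as in example.py; the threshold is a placeholder, to be tuned on validation data):

```python
import numpy as np

def euclidean_distance(f1, f2):
    # Smaller distance = more similar signatures
    return float(np.linalg.norm(f1 - f2))

# Placeholder 2048-d vectors standing in for SigNet features
f_ref = np.random.randn(2048)
f_query = np.random.randn(2048)

# Accept the pair if the distance is below a threshold tuned on validation data
threshold = 120.0  # placeholder value, not a tuned threshold
is_genuine = euclidean_distance(f_ref, f_query) < threshold
```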
I'm also getting bad results, even using the example images and the example.py code. I tried the proposed formula, Euclidean distance, and even a modified version of the proposed formula using mean() instead of max().
Could it be due to different library versions?
@ofgagliardi are you training writer-dependent classifiers as described in the article, or just using the network to extract features and comparing the features from different signatures? If you are doing the latter, I recommend checking out this paper https://arxiv.org/abs/1807.10755, which uses this network to extract features and trains a single "writer-independent" classifier. I hope this helps.
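Roughly, the writer-dependent setup from the article boils down to one binary classifier per enrolled user. Below is a minimal illustration with scikit-learn; the random arrays stand in for SigNet features, and the SVM hyperparameters are placeholders, not the published values:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder feature arrays standing in for 2048-d SigNet features
user_genuine = np.random.randn(12, 2048)       # the enrolled user's genuine signatures
random_forgeries = np.random.randn(200, 2048)  # genuine signatures of OTHER users

X = np.vstack([user_genuine, random_forgeries])
y = np.concatenate([np.ones(len(user_genuine)), np.zeros(len(random_forgeries))])

# One binary SVM per user; hyperparameters here are placeholders
clf = SVC(kernel='rbf', gamma='scale', class_weight='balanced')
clf.fit(X, y)

# At test time, score a questioned signature's features
questioned = np.random.randn(1, 2048)
score = clf.decision_function(questioned)  # higher = more likely genuine
```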
I tried the pre-trained models,
computing cosine similarity between the features of two images (e.g., one genuine and one forged signature):
real1.png, real2.png -> similarity: 0.31142098
real1.png, fake1.png -> similarity: 0.2714578
real1.png, fake2.png -> similarity: 0.6426417
real2.png, fake1.png -> similarity: 0.18943718
real2.png, fake2.png -> similarity: 0.6238067
I concluded as follows: a similarity above 60% means the pair is verified; otherwise it is rejected. But only 2/5 cases were classified correctly, so I got only 40% accuracy from the pre-trained model.
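The decision rule I used, written out as a small sketch (the labels below assume real1/real2 are from the same writer and fake1/fake2 are forgeries):

```python
import numpy as np

def cosine_similarity(a, b):
    # How each pair score above was computed from two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pair scores reported above; labels: 1 = genuine pair, 0 = genuine vs. forgery
scores = np.array([0.31142098, 0.2714578, 0.6426417, 0.18943718, 0.6238067])
labels = np.array([1, 0, 0, 0, 0])

threshold = 0.6
predicted = (scores >= threshold).astype(int)  # "verified" if similarity >= 0.6
accuracy = (predicted == labels).mean()        # 0.4, i.e. 2/5 pairs correct
print(accuracy)
```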
I have a few doubts, @luizgh, @gonultasbu, and @atinesh-s. Please help me resolve them:
When I test with real-world noisy images (e.g., signatures written on paper and photographed), I do not get good results. Why, and how can I resolve this issue?
How can I improve the accuracy of the SigNet model?
Thanks & Regards,
Murugan Rajenthiran