Closed ybtang56 closed 1 year ago
Hello @ybtang56 . Thanks for spotting this - the code is right (in the sense that it is the code used in the experiments), and Table 1 in the paper is wrong - I wrote that table to describe the code, but it looks like I got this part wrong. I don't think I can change the published paper, but I will keep this in mind for an update to the arXiv version. Thanks
Hi Luiz, thank you very much for your reply. I have another question about the number of samples per user in the WD (writer-dependent) setting. In your paper, each user has 10 samples in the experiment. May I ask what the effect on the EER would be if the number of samples were reduced? For example, with 5 samples per user, how much would the EER change?
This was investigated in one of the papers: https://arxiv.org/abs/1705.05787. We even plot curves varying the number of samples per user from 1 to 10, for 4 datasets. FYI, the code to replicate the training/testing of WD classifiers is now publicly available here: https://github.com/luizgh/sigver
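Since the EER is the metric being varied against the number of samples here, a minimal sketch of how one could estimate the EER from per-user classifier scores may be useful. This is my own threshold-sweep approximation, not the implementation in the linked repository, and the example scores are synthetic:

```python
import numpy as np

def estimate_eer(genuine_scores, forgery_scores):
    """Approximate the Equal Error Rate: sweep decision thresholds and find the
    point where the false-acceptance rate (FAR) and false-rejection rate (FRR)
    are closest to equal (here, the minimum over thresholds of max(FAR, FRR))."""
    genuine = np.asarray(genuine_scores, dtype=float)
    forgery = np.asarray(forgery_scores, dtype=float)
    best = 1.0
    for t in np.sort(np.concatenate([genuine, forgery])):
        far = np.mean(forgery >= t)   # forgeries accepted at threshold t
        frr = np.mean(genuine < t)    # genuine signatures rejected at threshold t
        best = min(best, max(far, frr))
    return best

# Toy example with well-separated scores (synthetic, for illustration only):
print(estimate_eer([0.9, 0.8, 0.7], [0.1, 0.2]))  # -> 0.0
```

To study the effect of training-set size, one would retrain the WD classifier with 1, 5, 10, ... reference signatures per user and compute this EER at each setting, as done in the curves of the linked paper.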
https://github.com/luizgh/sigver_wiwd/blob/3e509df4cebc5d8dbb083373a017e7d0cea4f0be/signet_spp_300dpi.py#L19
As shown in Table 1 of your paper, this pooling layer is defined as "pool3-s2-p0", so the code should look like this, right?
net['large_pool4'] = MaxPool2DLayer(net['large_conv4'], pool_size=3, stride=2)
If not, may I ask why?
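For reference, "pool3-s2-p0" denotes a 3x3 max pool with stride 2 and no padding, and the resulting spatial size follows the standard pooling formula (which Lasagne's MaxPool2DLayer also uses with its default ignore_border=True). A small sketch to check the output sizes under that configuration (the helper function name is mine):

```python
def pool_output_size(in_size, pool_size=3, stride=2, pad=0):
    """Spatial output size of a pooling layer:
    floor((in_size + 2*pad - pool_size) / stride) + 1."""
    return (in_size + 2 * pad - pool_size) // stride + 1

# e.g. an 8x8 feature map under pool3-s2-p0 becomes 3x3
print(pool_output_size(8))  # -> 3
```

Comparing these sizes against the shapes produced by the layer actually defined at the linked line of `signet_spp_300dpi.py` is one quick way to see where the table and the code diverge.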