Closed — HW140701 closed this issue 2 years ago
Thanks for your attention. We do not have a clear plan to release the relevant code for the CSL dataset, perhaps after the journal version is published. The data processing and training pipeline are the same across datasets (details can be found in the paper), but there is an evaluation trick on CSL due to its signer-independent setting: in my experience, using model.train() achieves better performance than model.eval().
Thank you very much for your reply; I look forward to the new paper. I will verify on the CSL dataset with reference to the details mentioned in the paper.
Feel free to open an issue if you run into any problems during implementation. Good luck :)
Ok. Thanks a lot for your help.
I wonder whether we should use model.train() during evaluation. As far as I know, model.train() enables training-time behavior in certain layers (e.g., dropout), but that seems to make no substantial difference for VAC.
It is about the statistics used by BatchNorm: in train mode, BN normalizes each batch with its own mean and variance rather than the running statistics accumulated during training.
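A minimal sketch of why this matters (pure Python for illustration, not the authors' code): in train mode BN centers each batch on its own statistics, while eval mode reuses running statistics from the training distribution. Under the distribution shift of a signer-independent test split, the batch's own statistics can match the test data better. The running values below are hypothetical.

```python
import math

def bn_train(x, eps=1e-5):
    """BatchNorm in train mode: normalize with the current batch's statistics."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def bn_eval(x, running_mean, running_var, eps=1e-5):
    """BatchNorm in eval mode: normalize with running statistics from training."""
    return [(v - running_mean) / math.sqrt(running_var + eps) for v in x]

# A test batch whose distribution is shifted away from the training data
# (as features from an unseen signer might be):
batch = [5.0, 6.0, 7.0]

# Train mode recenters on the batch itself, so outputs are zero-mean.
train_out = bn_train(batch)

# Eval mode normalizes with (hypothetical) training-set running statistics,
# leaving the shifted batch far from zero-mean.
eval_out = bn_eval(batch, running_mean=0.0, running_var=1.0)
```

Note that in PyTorch, model.train() only toggles this statistics choice; gradient tracking is controlled separately (e.g., with torch.no_grad() during evaluation).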
Thank you very much for your contribution to the community. In the paper, I saw that experiments were carried out on both the PHOENIX14 and CSL datasets. I would like to ask whether there are plans to release the data processing and training code for the CSL dataset as well.