Thank you for sharing your code. However, I have a few questions.
I tested my own checkpoints from the 20th and 47th training epochs, and their EER was around 8%.
In the checkpoint you shared, is the hparams.yaml file the configuration you used for training? I noticed that some parameters differ from the default arguments in the code, which might explain the higher EER in my reproduced model.
Also, which checkpoint achieved the lowest EER in your implementation? Did you use continued training to reach it?
Thank you very much for your response.