-
How can I get this file?
-
Dear All,
I tried running "python train_speaker_embeddings.py hparams/train_ecapa_tdnn_big.yaml" and finished the training process, but I can't find the "mean_var_norm_emb.ckpt" file. How to generat…
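A minimal sketch, assuming the checkpoint simply holds global mean/variance statistics over the training embeddings; the recipe presumably writes this file through its own normalization module, so the dict layout and file name below are illustrative only:

```python
# Hypothetical sketch: compute global mean/std over training embeddings and
# save them so a later verification step can normalize test embeddings.
# The dict layout and file name are assumptions, not the recipe's actual format.
import torch

def compute_embedding_norm_stats(embeddings: torch.Tensor):
    """embeddings: (num_utterances, emb_dim) tensor of training embeddings."""
    mean = embeddings.mean(dim=0)
    std = embeddings.std(dim=0).clamp_min(1e-8)  # guard against zero variance
    return mean, std

if __name__ == "__main__":
    embs = torch.randn(1000, 192)  # placeholder for real ECAPA-TDNN embeddings
    mean, std = compute_embedding_norm_stats(embs)
    torch.save({"mean": mean, "std": std}, "mean_var_norm_emb.ckpt")
```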
-
I am trying to replicate this recipe on my custom dataset, but I am facing some errors because I don't have meta files for my custom dataset while running this command "python train_speaker_embeddings.py…
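A sketch of what preparing simple meta files could look like for a dataset laid out as `data_root/<speaker_id>/<recording>.wav`; the column names (`ID`, `duration`, `wav`, `spk_id`) are an assumption about what the data loader expects, so compare against the CSVs produced by the official VoxCeleb preparation script and match its columns exactly:

```python
# Hypothetical sketch: write a metadata CSV for a custom dataset.
# torchaudio is used only to read durations.
import csv
import glob
import os

import torchaudio

def write_meta_csv(data_root: str, out_csv: str) -> None:
    rows = []
    for wav_path in sorted(glob.glob(os.path.join(data_root, "*", "*.wav"))):
        spk_id = os.path.basename(os.path.dirname(wav_path))
        info = torchaudio.info(wav_path)
        duration = info.num_frames / info.sample_rate
        utt_id = f"{spk_id}-{os.path.splitext(os.path.basename(wav_path))[0]}"
        rows.append({"ID": utt_id, "duration": duration,
                     "wav": wav_path, "spk_id": spk_id})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["ID", "duration", "wav", "spk_id"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    write_meta_csv("my_dataset", "train.csv")  # paths are placeholders
```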
-
I trained my model using both _MHA_ and _DoubleMHA_, and what I observed is that if I use different lengths of audio samples in training and testing, then the test evaluation results are really bad, close to r…
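One common workaround, sketched below under the assumption that the mismatch comes from segment length: chunk each test utterance into training-length segments, embed each chunk, and average. `extract_embedding` is a placeholder for whatever call produces an embedding from a waveform, not a real API of this repo.

```python
# Hypothetical sketch: score test audio with training-length chunks and
# average the per-chunk embeddings.
import torch

def embed_with_training_length(waveform: torch.Tensor,
                               extract_embedding,
                               segment_samples: int) -> torch.Tensor:
    """waveform: (num_samples,) mono signal; returns an averaged embedding."""
    if waveform.numel() <= segment_samples:
        return extract_embedding(waveform)
    # Split into non-overlapping training-length chunks, dropping the remainder.
    usable = waveform[: waveform.numel() // segment_samples * segment_samples]
    chunks = usable.view(-1, segment_samples)
    embs = torch.stack([extract_embedding(c) for c in chunks])
    return embs.mean(dim=0)
```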
-
How many hours of speech do you need to get the quality reported in the paper?
-
Hi @dvisockas @joonson, if we want to test with a new test list from a different dataset, can we do it by changing only the test list?
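For reference, a sketch of scoring a custom trial list, assuming the VoxCeleb-style format `label enroll_wav test_wav` per line; `embed` is a placeholder for the model's embedding extraction, not a real function of this repo.

```python
# Hypothetical sketch: cosine-score the pairs in a custom trial list.
import torch.nn.functional as F

def score_trials(trial_list_path: str, embed):
    results = []  # list of (label, score) pairs
    with open(trial_list_path) as f:
        for line in f:
            label, enroll, test = line.strip().split()
            e1, e2 = embed(enroll), embed(test)
            score = F.cosine_similarity(e1, e2, dim=-1).item()
            results.append((int(label), score))
    return results
```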
-
Hi,
The output of `python train_speaker_embeddings.py hyperparams/train_x_vectors.yaml` does not include the "mean_var_norm_emb.ckpt" file, which is used in "verification_plda_xvector.yaml"; is it(…
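If the statistics end up in a simple dict (an assumption, not necessarily the recipe's actual format), applying them to an embedding before PLDA scoring could look like this sketch:

```python
# Hypothetical sketch: load saved embedding statistics and normalize an
# embedding before scoring. The {"mean": ..., "std": ...} layout is assumed.
import torch

def normalize_embedding(emb: torch.Tensor,
                        stats_path: str = "mean_var_norm_emb.ckpt") -> torch.Tensor:
    stats = torch.load(stats_path)
    return (emb - stats["mean"]) / stats["std"]
```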
-
When I tested with the pre-trained checkpoint, I found that the L1 distance on the VoxCeleb and taichiHD datasets was better than the L1 distance reported in the paper. I want to know why.
Is the pre-trained…
-
**Issue by [mogwai](https://github.com/mogwai)**
_Saturday Oct 31, 2020 at 17:15 GMT_
_Originally opened as https://github.com/hbredin/pyannote-audio-v2/issues/56_
----
Create a way to convert Ligh…
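A minimal sketch of one way such a conversion could work, assuming the goal is to pull a plain PyTorch `state_dict` out of a Lightning `.ckpt` file; the `"model."` prefix stripped below is an assumption about how the LightningModule wraps its network.

```python
# Hypothetical sketch: extract plain PyTorch weights from a PyTorch Lightning
# checkpoint. Lightning .ckpt files are torch-serialized dicts that contain a
# "state_dict" entry.
import torch

def lightning_ckpt_to_state_dict(ckpt_path: str, prefix: str = "model."):
    ckpt = torch.load(ckpt_path, map_location="cpu")
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in ckpt["state_dict"].items()}

# Usage (hypothetical): net.load_state_dict(lightning_ckpt_to_state_dict("last.ckpt"))
```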
-
Hello,
I am trying to do two things:
1) I am trying to fine-tune speaker embeddings from a VoxCeleb pre-trained model on a smaller dataset of around 60 speakers (a layer-freezing sketch follows below).
I wanted to try to freeze some la…
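A minimal sketch of the freezing part, assuming a plain PyTorch module; `model` and the layer-name prefixes are placeholders to be adapted to the names printed by `model.named_parameters()`.

```python
# Hypothetical sketch: freeze all layers except the ones whose names start
# with the given prefixes, then train only the unfrozen parameters.
import torch

def freeze_except(model: torch.nn.Module, trainable_prefixes=("blocks.3", "fc")):
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)
    # Hand only the trainable parameters to the optimizer.
    return [p for p in model.parameters() if p.requires_grad]

# optimizer = torch.optim.Adam(freeze_except(model), lr=1e-4)
```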