Open kay9012 opened 1 month ago
Hi,
The results are obtained by running main.py test after training.
You can refer to generate_eval_pair.py to distinguish s2s from u2u: it generates the metafiles for testing the s2s and u2u settings by splitting the VCTK speakers accordingly.
Thanks.
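A minimal sketch of the kind of split generate_eval_pair.py performs, assuming s2s/u2u stand for seen-to-seen and unseen-to-unseen conversion (the function names, pair counts, and VCTK-style speaker IDs below are illustrative assumptions, not taken from the repo):

```python
import random

def split_speakers(speakers, n_unseen, seed=0):
    """Split a speaker list into seen (for s2s) and unseen (for u2u) sets."""
    rng = random.Random(seed)
    shuffled = speakers[:]
    rng.shuffle(shuffled)
    unseen = sorted(shuffled[:n_unseen])
    seen = sorted(shuffled[n_unseen:])
    return seen, unseen

def make_pairs(pool, n_pairs, seed=0):
    """Build (source, target) speaker pairs for a conversion metafile."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        src = rng.choice(pool)
        tgt = rng.choice([s for s in pool if s != src])
        pairs.append((src, tgt))
    return pairs

# Hypothetical VCTK-style speaker IDs (p225, p226, ...)
speakers = [f"p{225 + i}" for i in range(30)]
seen, unseen = split_speakers(speakers, n_unseen=10)
s2s_pairs = make_pairs(seen, n_pairs=4)    # seen-to-seen test pairs
u2u_pairs = make_pairs(unseen, n_pairs=4)  # unseen-to-unseen test pairs
```

The key property is that the two speaker pools are disjoint, so u2u pairs never involve a speaker the model saw during training.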
Thank you for your response! And what about speaker verification? I used Resemblyzer/demo05_fake_speech_detection.py to compute it. Is that correct?
I think it uses a similar process. You can check ./src/metric.py to get scores related to speaker verification.
Thanks.
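For reference, speaker-verification scores in this setting are typically the cosine similarity between a converted utterance's speaker embedding and the target speaker's embedding, with an acceptance threshold on top. A minimal sketch of that scoring, assuming the embeddings come from Resemblyzer's VoiceEncoder.embed_utterance (the threshold value and function names here are assumptions, not the repo's metric.py):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two speaker embeddings (d-vectors)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sv_accept_rate(converted_embeds, target_embeds, threshold=0.68):
    """Fraction of converted utterances whose embedding is similar
    enough to the target speaker to count as verified.
    threshold=0.68 is an illustrative value, not a calibrated one."""
    sims = [cosine_similarity(c, t)
            for c, t in zip(converted_embeds, target_embeds)]
    return sum(s >= threshold for s in sims) / len(sims)
```

In practice the threshold would be calibrated on genuine/impostor pairs (e.g. at the equal error rate) rather than fixed by hand.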
Are the evaluation results in the paper's tables obtained by running main.py test after training, or by evaluating the outputs produced with convert? And how do you distinguish between s2s and u2u?