Hi, Yuxin Wu.
Thank you for publishing this cool library.
Unfortunately, I ran into a problem while testing it.
The test environment is as follows.
Four speakers were added to the enrollment set, and each speaker has two to four wav files.
The voices were registered with the command
speaker-recognition.py -t enroll -i "./voice/*" -m model.out
and enrollment completed successfully.
Then, when prediction was run on files that were not used for enrollment,
the following results were obtained.
./voice/test/hoseok/hoseok.wav -> seonyoung [failed]
./voice/test/christi/christi.wav -> seonyoung [failed]
./voice/test/seongjun/seongjun.wav -> seonyoung [failed]
./voice/test/seonyoung/seonyoung.wav -> seonyoung
./voice/test/ziye/ziye.wav -> seonyoung [failed]
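For reference, prediction was run with the same script using -t predict, mirroring the enroll command above; the command was of roughly this form (the glob below is illustrative, the exact paths may have differed):
speaker-recognition.py -t predict -i "./voice/test/*/*.wav" -m model.out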
The prediction is the same (seonyoung) for every test file.
One suspicious point is that seonyoung's enrollment data was the longest.
Do you have any idea what the problem might be?