Jungjee / RawNet

Official repository for RawNet, RawNet2, and RawNet3
MIT License

how to evaluate your implementation with a different dataset #30

Closed rosana-sc7 closed 1 year ago

rosana-sc7 commented 1 year ago

Hi!

I'm trying to evaluate your implementation with a different dataset (and therefore with a different test_path and test_list). What is the correct way to do so without modifying the default paths in trainSpeakerNet.py? That is, what should the evaluation command look like? Something like this? --> python ./trainSpeakerNet.py --test_path /path_to_test --test_list /path_to_the_list

Sorry for my ignorance; I'm new to programming in Python and to neural networks. Looking forward to your response. Thank you!

Jungjee commented 1 year ago

Hi @rosana-sc7, the current code doesn't support evaluating on different datasets. However, you should be able to do this with a small modification to the existing code.

Refer to the parts below. https://github.com/Jungjee/RawNet/blob/a49fc21942dd3414ac2100f26f5c208379f90adc/python/RawNet3/infererence.py#L61 https://github.com/Jungjee/RawNet/blob/a49fc21942dd3414ac2100f26f5c208379f90adc/python/RawNet3/infererence.py#L107

Note that your evaluation protocol should be in the same format as the current VoxCeleb1-O protocol.
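For reference, each line of a VoxCeleb1-O style trial list has the form `<label> <enrolment_wav> <test_wav>`, where the label is 1 for a same-speaker pair and 0 otherwise. Below is a minimal, hypothetical sketch (not the repository's code) of parsing such a list and scoring an embedding pair by cosine similarity; the function names and the plain-list embeddings are illustrative assumptions only.

```python
import math

def parse_trials(lines):
    """Parse VoxCeleb1-O style trial lines: '<label> <enrol_wav> <test_wav>'.

    Returns a list of (label, enrolment_path, test_path) tuples.
    """
    trials = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        label, enrol, test = line.split()
        trials.append((int(label), enrol, test))
    return trials

def cosine_score(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings (plain float lists)."""
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    return dot / (norm_a * norm_b)

# Example: two trial lines in the expected format (paths are made up).
trials = parse_trials([
    "1 id10270/x6uYMmwYi8Y/00001.wav id10270/8jEAjG6SegY/00008.wav",
    "0 id10309/abc/00002.wav id10296/def/00003.wav",
])
```

With embeddings extracted per utterance (e.g. via the linked inference script), you would score each parsed trial pair with `cosine_score` and compute EER over the labels.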