Closed: Mlallena closed this issue 2 years ago.
Hi @Mlallena,
It makes sense that the model throws that error, as it expects an i-vector whose dimensions are the ones you mentioned. More information on how we computed them can be found in section 2 of this README file: https://github.com/hechmik/voxceleb_enrichment_age_gender/blob/main/notebooks/README.md.
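Since the error comes from a dimension mismatch, it can help to validate the i-vector's shape before calling the model. This is only a sketch: the expected dimensionality of 400 is an assumption based on the shapes discussed in this thread, not something confirmed by the library itself.

```python
import numpy as np

# Assumed i-vector dimensionality (see the (400, 2) shape mentioned
# elsewhere in this thread); adjust if your extractor differs.
EXPECTED_DIM = 400

def check_ivector(vec):
    """Fail early with a clear message instead of the model's shape error."""
    vec = np.asarray(vec, dtype=float)
    if vec.shape != (EXPECTED_DIM,):
        raise ValueError(f"expected shape ({EXPECTED_DIM},), got {vec.shape}")
    return vec

iv = check_ivector(np.zeros(400))  # passes: correct dimensionality
```

Passing a vector of any other length (e.g. 512) raises a `ValueError` with an explicit message, which is easier to debug than the model's internal error.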
To check and/or fine-tune your model with my own audio files, what would I have to do with them? Do I need to compute their MFCCs first? Is there a method that takes an audio file path directly, so the internal pipeline produces the model's input?
Thanks for your previous answer. As I said earlier, I'll keep looking on my own, but any answer is welcome.
Sorry for the delay in responding, but I was at work and didn't have time to get back to you until now. Basically, the procedure you need to follow is this:
As said in the README.md file, we used the asvtorch tool for all of these steps, as it was the easiest option for processing VoxCeleb recordings. In your scenario you'll need to modify this library a little; however, I never had the chance to do that myself, as we always worked inside the VoxCeleb ecosystem.
A good starting point is the description of the actual steps needed for computing i-vectors, which you can find here. The solution proposed in our paper is VoxCeleb-dependent, as we used the unlabelled recordings for training the various extractors: in my opinion you could replicate the other steps on other datasets as well, even though the results likely won't be the same.
I hope I was clear enough!
I am trying to use the gender recognition model shown here ('ivec_log_reg_model.torch'), but the suggested method runs into an error:
Replacing (512, 1) with (400, 2) in the example does seem to work. The problem now is that there's no mention of how to test the model with your own audio files. I'll see if I can find out, but any suggestion would be welcome.
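For context, the (400, 2) shape suggests a logistic-regression classifier mapping a 400-dimensional i-vector to 2 classes (presumably male/female). The sketch below illustrates only those dimensions with random placeholder weights; the real weights would come from the 'ivec_log_reg_model.torch' checkpoint, which is not loaded here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters with the (400, 2) shape discussed above:
# in the real model these come from 'ivec_log_reg_model.torch'.
W = rng.normal(size=(400, 2))   # weight matrix: 400 i-vector dims -> 2 classes
b = np.zeros(2)                 # one bias per class

ivector = rng.normal(size=400)  # a single 400-dim i-vector
logits = ivector @ W + b
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the 2 classes
pred = int(probs.argmax())      # predicted class index: 0 or 1
```

This also clarifies why the original (512, 1) example failed: both the input dimensionality and the number of output classes have to match what the checkpoint was trained with.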