smalissa opened this issue 5 years ago
Also, could you edit the documentation in the README.md file to show the steps for using this code?
Configure and run gen_mel.py, and you can get a mel spectrogram from a wav file.
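For reference, here is a minimal sketch of the kind of extraction gen_mel.py performs, written with librosa. The parameter values (sample rate, fft size, hop length, number of mels) and the normalization are assumptions for illustration; the actual script and its config may differ, so treat this only as a rough outline.

```python
# Minimal sketch of per-file mel-spectrogram extraction (librosa-based).
# The parameter values below are illustrative; check gen_mel.py / its config
# for the exact fft size, hop length, number of mels, and normalization.
import librosa
import numpy as np

def wav_to_mel(wav_path, sr=16000, n_fft=1024, hop_length=256, n_mels=80):
    wav, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    # Convert the power spectrogram to a log scale (dB); AutoVC-style
    # pipelines usually also clip and rescale to [0, 1], omitted here.
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return log_mel.T  # shape: (frames, n_mels)
```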
OK, this is for one audio file, but if I have a dataset of audio files, how can I get the mel spectrograms for all the wavs at once? After that, how can I use AutoVC once I have the mel spectrograms? Can I use the pre-trained WaveNet vocoder model from AutoVC?
1. You can implement batch processing yourself (see the sketch after this list).
2. After you generate the mel spectrograms, you can follow the tips in the AutoVC README to train your voice-conversion model.
3. To generate a wav from a mel spectrogram, run vocoder.ipynb. You can use the pre-trained WaveNet vocoder from AutoVC, or you can train your own WaveNet vocoder; follow the steps in the README.
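As a starting point for step 1, here is a hypothetical batch driver. It walks a dataset directory, extracts one mel spectrogram per wav, and saves each as a .npy file in a mirrored output tree. The helper `wav_to_mel` is a stand-in for whatever gen_mel.py does for a single file (for example, the sketch above); adapt it to the repo's actual extraction code.

```python
# Hypothetical batch driver around a single-file extraction function.
# `wav_to_mel` is assumed to return a (frames, n_mels) array for one wav.
import os
import numpy as np

def batch_gen_mel(wav_dir, out_dir):
    for root, _, files in os.walk(wav_dir):
        for name in files:
            if not name.lower().endswith(".wav"):
                continue
            wav_path = os.path.join(root, name)
            # Mirror the input directory structure under out_dir.
            rel = os.path.relpath(wav_path, wav_dir)
            out_path = os.path.join(out_dir, os.path.splitext(rel)[0] + ".npy")
            os.makedirs(os.path.dirname(out_path), exist_ok=True)
            mel = wav_to_mel(wav_path)  # single-file extraction, as above
            np.save(out_path, mel)

# Example usage: batch_gen_mel("dataset/wavs", "dataset/mels")
```

The saved .npy files can then be fed into the AutoVC training pipeline described in that repo's README.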
OK, thank you, but how can I implement batch processing, and what is the aim of it? How can I train my own WaveNet vocoder to convert mel spectrograms to wavs? How can I build my own vocoder.ipynb, given that my dataset is in another language (Arabic)?
@miaoYuanyuan Thank you for your work; this really helped me extract spectrogram features from wavs. However, the features extracted this way differ from those obtained from result.pkl in the AutoVC project https://github.com/auspicious3000/autovc. So have you reproduced the AutoVC project using the VCTK dataset and gotten the same good quality as their demo? Or did you train your model on another dataset and get good results?
@miaoYuanyuan Thank you for your work. I am having the same issue as @xw1324832579 mentioned. I tried running gen_mel.py to generate the spectrogram and reconstructing the wav file using r9y9's WaveNet vocoder (I actually ran the pre-trained model under the AutoVC repo), but the result contains some obvious artifacts. Have you successfully gotten the same good results as their demo?
@miaoYuanyuan Thank you for your work and for mentioning me as well. But this is just the code; how can I use it to apply the AutoVC technique? Could you guide me? Thanks.