Plachtaa / FAcodec

Training code for FAcodec presented in NaturalSpeech3

How many steps would be enough if i train this model from start? #14

Open lixuyuan102 opened 1 month ago

lixuyuan102 commented 1 month ago

Hi! Nice work! Could you share how many steps would be sufficient to train a new model? I'm trying to train a 16 kHz FAcodec. The results reconstructed by the 130,000-step checkpoint still sound different from the real speech, especially the speaker timbre.
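A minimal sketch (not from the FAcodec repo) for putting a number on the timbre mismatch: cosine similarity between speaker embeddings of the original and the reconstruction, assuming the third-party resemblyzer package is installed. File names are placeholders.

```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

def speaker_similarity(ref_path: str, recon_path: str) -> float:
    """Cosine similarity between speaker embeddings of two utterances."""
    ref = encoder.embed_utterance(preprocess_wav(ref_path))
    rec = encoder.embed_utterance(preprocess_wav(recon_path))
    # Embeddings are roughly unit-norm already; normalize anyway to be safe.
    return float(np.dot(ref, rec) / (np.linalg.norm(ref) * np.linalg.norm(rec)))

# Example (hypothetical file names):
# print(speaker_similarity("original.wav", "reconstructed_130k.wav"))
```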

lixuyuan102 commented 1 month ago

Here is the loss curve: [loss curve screenshot]

Plachtaa commented 1 month ago

The released model was trained for 670k steps; normally 400k would be sufficient for a codec, following descript-audio-codec's practice.

lixuyuan102 commented 1 month ago

> The released model was trained for 670k steps; normally 400k would be sufficient for a codec, following descript-audio-codec's practice.

Thanks!

lixuyuan102 commented 1 month ago

I have trained the model on VoxCeleb2 for 400k steps. However, the reconstructed speech does not sound as good as the demo page, and the reconstruction of noisy speech sounds even worse. Here are the samples:

O1: https://github.com/lixuyuan102/FAcodec/blob/master/ZCwVV3niXxo_00179.m4a
R1: https://github.com/lixuyuan102/FAcodec/blob/master/ZCwVV3niXxo_00179.wav
O2: https://github.com/lixuyuan102/FAcodec/blob/master/Zsus9yFgaJM_00132.m4a
R2: https://github.com/lixuyuan102/FAcodec/blob/master/Zsus9yFgaJM_00132.wav

Is there a problem with the data scale or something else?

Plachtaa commented 1 month ago

I have checked the samples you shared. One thing I notice is that they sound quite noisy. I don't know whether they are from your training set, but I don't suggest including anything other than clean vocal data, as FAcodec is designed for speech rather than as a universal audio codec. If your training speech data has not gone through a vocal separation process, it may indeed affect model performance.

lixuyuan102 commented 1 month ago

> I have checked the samples you shared. One thing I notice is that they sound quite noisy. I don't know whether they are from your training set, but I don't suggest including anything other than clean vocal data, as FAcodec is designed for speech rather than as a universal audio codec. If your training speech data has not gone through a vocal separation process, it may indeed affect model performance.

Thanks, I'll try processing my training data with denoising and source separation models.
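A sketch of one possible preprocessing pass under these assumptions: Demucs (a third-party separator, not part of FAcodec) extracts the vocal stem, which is then resampled to 16 kHz with torchaudio. The directory layout and the "htdemucs" model name are assumptions; adapt them to your setup.

```python
import subprocess
from pathlib import Path

import torchaudio

RAW_DIR = Path("data/raw")        # original noisy clips (placeholder path)
SEP_DIR = Path("data/separated")  # where Demucs writes its stems
OUT_DIR = Path("data/clean_16k")  # final 16 kHz training data
OUT_DIR.mkdir(parents=True, exist_ok=True)

for wav_path in sorted(RAW_DIR.glob("**/*.wav")):
    # --two-stems=vocals keeps only a vocals / no_vocals split.
    subprocess.run(
        ["demucs", "--two-stems=vocals", "-o", str(SEP_DIR), str(wav_path)],
        check=True,
    )
    # Demucs writes <model_name>/<track_name>/vocals.wav under the output dir;
    # "htdemucs" is the default model name in recent versions (assumption).
    vocal_path = SEP_DIR / "htdemucs" / wav_path.stem / "vocals.wav"
    wav, sr = torchaudio.load(str(vocal_path))
    wav = torchaudio.functional.resample(wav, sr, 16000)
    torchaudio.save(str(OUT_DIR / wav_path.name), wav, 16000)
```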

lixuyuan102 commented 1 month ago

I found that although the training data has been denoised, fine-tuning the pretrained FAcodec on it still results in unstable pronunciation. Moreover, the pronunciation instability seems more significant when the original audio sounds poorer. Can I just tune the timbre module and freeze the other parts to adapt it to new speakers?
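A minimal sketch of freezing everything except a timbre-related module before fine-tuning. The attribute name "timbre_extractor" is an assumption, not the actual FAcodec module name; adapt it to whatever the checkpoint exposes.

```python
import torch

def freeze_all_but_timbre(model: torch.nn.Module, timbre_attr: str = "timbre_extractor"):
    # Freeze every parameter first.
    for p in model.parameters():
        p.requires_grad = False
    # Unfreeze only the timbre branch (hypothetical attribute name).
    for p in getattr(model, timbre_attr).parameters():
        p.requires_grad = True

# Pass only trainable parameters to the optimizer so the frozen ones stay fixed:
# model = ...  # load the pretrained FAcodec model here
# freeze_all_but_timbre(model)
# optimizer = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-5
# )
```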

lixuyuan102 commented 1 month ago

By the way, I found that only the content, prosody, and timbre latent features are used when training the FAcodec redecoder. May I ask why z_r is not employed?
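For reference, a toy sketch of what such a redecoder forward pass looks like when only content, prosody, and timbre condition the output and the acoustic-detail residual z_r is simply never passed in. All names and shapes here are hypothetical, for illustration only; this is not the repo's implementation.

```python
import torch
import torch.nn as nn

class ToyRedecoder(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv1d(dim, 1, 3, padding=1),
        )

    def forward(self, z_c, z_p, timbre):
        # Frame-level content and prosody codes are summed; the global timbre
        # vector is broadcast over time. z_r (acoustic detail) never enters.
        z = z_c + z_p + timbre.unsqueeze(-1)
        return self.net(z)

# Example shapes (hypothetical):
# z_c = torch.randn(1, 256, 100); z_p = torch.randn(1, 256, 100)
# timbre = torch.randn(1, 256)
# wav = ToyRedecoder()(z_c, z_p, timbre)
```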