I use the pretrained model and it works very well except in some special cases, so I need to fine-tune it for my use case.
Now I train the model like this:
1. Download the DNS Interspeech 2020 branch data, split the clean data into 6 s segments with 3 s overlap; this way the model sees 500 h of noisy data per epoch;
2. Train on 8 × 2080Ti GPUs with batch size set to 8. Other parameters are the same as in train.toml;
3. The training loss looks like this:
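For reference, the 6 s / 3 s-overlap split in step 1 can be sketched like this (a minimal numpy sketch; the function name and the 16 kHz sample rate are my assumptions, not taken from the repo):

```python
import numpy as np

def split_with_overlap(wav, sr, seg_len_s=6.0, overlap_s=3.0):
    """Split a 1-D waveform into fixed-length segments with overlap.

    seg_len_s: segment length in seconds (6 s here)
    overlap_s: overlap between consecutive segments (3 s here)
    """
    seg_len = int(seg_len_s * sr)
    hop = int((seg_len_s - overlap_s) * sr)  # step between segment starts
    return [wav[start:start + seg_len]
            for start in range(0, len(wav) - seg_len + 1, hop)]

# example: 16 kHz, 15 s of audio -> segments starting at 0 s, 3 s, 6 s, 9 s
sr = 16000
wav = np.zeros(15 * sr, dtype=np.float32)
segs = split_with_overlap(wav, sr)
print(len(segs))  # 4 segments of 6 s each
```

Trailing audio shorter than one full segment is dropped in this sketch; padding it instead is another option.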
Is there any other preprocessing applied to the original training data? And do you have any advice for training with such a small batch size?
Thank you