maum-ai / voicefilter

Unofficial PyTorch implementation of Google AI's VoiceFilter system
http://swpark.me/voicefilter

Question about start point of SDR #9

Closed lycox1 closed 5 years ago

lycox1 commented 5 years ago

Dear @seungwonpark

First of all, I would like to thank you for this great open-source project. I wanted to test your nice code, so I tried to train VoiceFilter.

But I am having a problem with the SDR. The SDR graph in the VoiceFilter GitHub README ranges from about 2 to 10 dB, but in my case the SDR only ranges from -0.8 to 1.2 dB.

[image: SDR curve]

I am trying to find the cause of the problem, but I cannot find it.

Can you help me find the cause of the problem?

I used the default yaml and generator.py. (train-clean-100, train-clean-360, and dev-clean are used for training.)

Could you let me know what I can check?

Thank you!

seungwonpark commented 5 years ago

Hi @lycox1, thanks for your interest in the VoiceFilter open-source repo.

As discussed in #5, the SDR may differ significantly from the results in the README since it's measured on a random sample. Please refer to Jungwon Seo's comment here: https://github.com/mindslab-ai/voicefilter/issues/5#issuecomment-497746793
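To make the randomness concrete, here is a rough sketch of measuring the SDR of a single randomly picked test pair with mir_eval; the paths, the placeholder list, and the 16 kHz rate are illustrative, not our exact evaluation code:

```python
# Rough illustration only, not the repo's exact evaluation code: measure the
# SDR of one randomly picked test pair. Paths below are placeholders.
import random
import numpy as np
import librosa
from mir_eval.separation import bss_eval_sources

test_pairs = [("target_000.wav", "estimated_000.wav")]  # placeholder list
target_path, estimated_path = random.choice(test_pairs)

target, _ = librosa.load(target_path, sr=16000)
estimated, _ = librosa.load(estimated_path, sr=16000)
length = min(len(target), len(estimated))

# bss_eval_sources expects arrays shaped (n_sources, n_samples)
sdr, sir, sar, _ = bss_eval_sources(target[np.newaxis, :length],
                                    estimated[np.newaxis, :length])
print(f"SDR of this random test pair: {sdr[0]:.2f} dB")
```

Since the number comes from one random pair, it can easily swing by several dB between runs.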

lycox1 commented 5 years ago

Thanks, @seungwonpark. I have already read #5.

I think the key points of #5 are the following:

  1. train-other-500 is not used for training; just use train-clean-100 and train-clean-360. --> I use train-clean-100, train-clean-360, and dev-clean.
  2. Compare against the published samples (the original paper's samples: https://google.github.io/speaker-id/publications/VoiceFilter/). --> I checked dev_tuples.csv and train_tuples.csv (https://github.com/google/speaker-id/tree/master/publications/VoiceFilter/dataset/LibriSpeech). Files from dev-clean exist in dev_tuples.csv, but files from train-clean-100 and train-clean-360 exist in neither dev_tuples.csv nor train_tuples.csv (a sketch of such a check follows this list).

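For reference, a rough sketch of how such a membership check can be scripted; the CSV parsing is an assumption (a simple substring match on utterance IDs) and the paths are just examples:

```python
# Sketch of the membership check described above. The tuple CSVs' exact column
# layout is an assumption here; I simply look for LibriSpeech utterance IDs
# (e.g. "84-121123-0001") anywhere in the rows.
import csv
from pathlib import Path

def utterance_ids(subset_dir):
    # LibriSpeech file stems double as utterance IDs
    return {p.stem for p in Path(subset_dir).rglob("*.flac")}

def csv_text(csv_path):
    with open(csv_path, newline="") as f:
        return " ".join(" ".join(row) for row in csv.reader(f))

local_ids = utterance_ids("LibriSpeech/train-clean-100")  # placeholder path
published = csv_text("dev_tuples.csv")                    # placeholder path
overlap = [uid for uid in local_ids if uid in published]
print(f"{len(overlap)} / {len(local_ids)} utterances also appear in dev_tuples.csv")
```
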
Could you please let me know if you have any other clues!

Thanks.

lawlict commented 5 years ago

Hello @seungwonpark, I also get a similar problem to @lycox1's. Could you please give me a hand? I followed almost all the README steps, except that the audio files in LibriSpeech have the .flac suffix, so I changed line 24 of normalize-resample.sh from "for f in $(find . -name "*.wav"); do" to "for f in $(find . -name "*.flac"); do".
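In case it helps anyone, here is a rough Python equivalent of that loop for .flac inputs; librosa/soundfile here are stand-ins, not what normalize-resample.sh actually calls, and the 16 kHz mono target is my assumption from the default config:

```python
# Rough Python equivalent of the .flac resampling loop, not the script itself.
# Assumes a 16 kHz mono target; adjust to match config/default.yaml.
from pathlib import Path
import librosa
import soundfile as sf

TARGET_SR = 16000

for flac_path in Path("LibriSpeech").rglob("*.flac"):  # placeholder root dir
    audio, _ = librosa.load(str(flac_path), sr=TARGET_SR, mono=True)
    sf.write(flac_path.with_suffix(".wav"), audio, TARGET_SR)
```
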

Since I cloned the newest code, train-other-500 has already been removed. By the way, I noticed that the README says the number of test cases is 1000, while the code uses only 100 test cases.

Here are the images of the training loss, test loss, and test SDR from my experiment. Although the test data may differ, I believe a correct training loss curve should look similar, right?

[images: training loss, test loss, test SDR]

seungwonpark commented 5 years ago

Hi, @lawlict

The test loss curve may fluctuate since we didn't perform the evaluation on a sufficient amount of data, so I think the curve may look a bit different.
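If you want a steadier number, averaging the SDR over more random test pairs is one option; here is a minimal sketch (the data plumbing is a placeholder, not our actual test loop):

```python
# Sketch: average the SDR over more random test pairs so the reported number
# fluctuates less. `pairs` is a placeholder for (clean_target, estimate)
# waveform tuples taken from the actual test data loading.
import numpy as np
from mir_eval.separation import bss_eval_sources

def single_sdr(target, estimate):
    length = min(len(target), len(estimate))
    sdr, _, _, _ = bss_eval_sources(target[np.newaxis, :length],
                                    estimate[np.newaxis, :length])
    return sdr[0]

def mean_sdr(pairs):
    scores = [single_sdr(t, e) for t, e in pairs]
    # the standard error shrinks roughly with 1/sqrt(N), so more test pairs
    # give a steadier curve between evaluations
    return float(np.mean(scores)), float(np.std(scores) / np.sqrt(len(scores)))
```

Evaluating on, say, 1000 pairs instead of 100 should roughly cut the run-to-run wobble by a factor of three.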