Open akshayaCap opened 6 years ago
Hi, I was going through your repository. I could not find the results of LSTM and BLSTM on the 2-speaker .wav audio files generated by you. Can you please add them?
Also, have you tried this algorithm with multiple speakers with added noise? If yes, can you share the results?
@akshayaCap Hello, I have uploaded the .wav results for the 2 speakers in "6-separated_result_BLSTM" and "7-separated_result_LSTM". As for "multiple speakers with added noise": one speaker can be regarded as the target speaker, while the other speakers can be viewed as noise. The algorithms are in the first two folders.
@pchao6 Thanks for your reply. The input files are missing from these folders. Can you please add them?
@akshayaCap I'm sorry, but I can't share the input files. The input dataset, WSJ0, requires a paid license; you can purchase it from the WSJ0 corpus website.
@pchao6 Thank you for the clarification.
@akshayaCap Thanks for your interest in my work. First, I haven't run a separation experiment on the VCTK dataset, but you can try it. Second, when creating the mixed dataset, you can simply replace one of the two speakers' .wav files with noise data; the other experiment settings, including the code, stay the same. You can try it.
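For reference, here is a minimal sketch of one way to build such a speech-plus-noise mixture, assuming `numpy` and `soundfile` are installed. The `mix_at_snr` helper, the file names, and the SNR value are illustrative placeholders, not part of this repository.

```python
# Minimal sketch (not from this repository): mix a clean speech .wav with a
# noise .wav at a chosen SNR to create a speech+noise training pair.
import numpy as np
import soundfile as sf

def mix_at_snr(speech_path, noise_path, out_path, snr_db=5.0):
    speech, sr = sf.read(speech_path)
    noise, sr_n = sf.read(noise_path)
    assert sr == sr_n, "resample so both files share one sample rate"

    # Collapse multi-channel audio to mono for a simple 1-D mix.
    if speech.ndim > 1:
        speech = speech.mean(axis=1)
    if noise.ndim > 1:
        noise = noise.mean(axis=1)

    # Tile or trim the noise so it matches the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]

    # Scale the noise so the mixture reaches the requested SNR (in dB).
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    mixture = speech + scale * noise

    # Normalize if the mixture would clip, then write it out.
    peak = np.max(np.abs(mixture))
    if peak > 1.0:
        mixture = mixture / peak
    sf.write(out_path, mixture, sr)
    return mixture

# Example usage (paths are placeholders):
# mix_at_snr("speaker1.wav", "noise.wav", "mixture.wav", snr_db=5.0)
```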