aishoot / LSTM_PIT_Speech_Separation

Two-talker Speech Separation with LSTM/BLSTM by Permutation Invariant Training method.

Using VCTK-dataset #14

Open AndyGaogao opened 5 years ago

AndyGaogao commented 5 years ago

How can I use the VCTK dataset to train the model? Should I alter the structure of the VCTK dataset downloaded from the original webpage? Thanks for your reply.

aishoot commented 5 years ago

@AndyGaogao yes
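
For context, this repository trains a two-talker separation model, so the single-speaker VCTK folders do need to be restructured into mixture/source pairs. Below is a minimal sketch (not the repository's own preprocessing script) of one way to combine utterances from two different VCTK speakers into `mix.wav`, `s1.wav`, and `s2.wav` files; the `wav48/<speaker>/*.wav` layout, the output paths, and the 0 dB mixing level are assumptions.

```python
import os
import random
import numpy as np
import soundfile as sf

VCTK_WAV_DIR = "VCTK-Corpus/wav48"   # assumed layout: wav48/<speaker_id>/*.wav
OUT_DIR = "data/2speakers"           # assumed output location

def mix_pair(path_a, path_b, out_dir, snr_db=0.0):
    """Mix two utterances at the given SNR; keep the clean sources as PIT targets."""
    a, sr_a = sf.read(path_a)
    b, sr_b = sf.read(path_b)
    assert sr_a == sr_b, "resample first if the sampling rates differ"
    n = min(len(a), len(b))          # truncate both to the shorter utterance
    a, b = a[:n], b[:n]
    # scale speaker B so that A is snr_db louder than B in the mixture
    gain = np.sqrt(np.sum(a ** 2) / ((np.sum(b ** 2) + 1e-8) * 10 ** (snr_db / 10)))
    b = b * gain
    os.makedirs(out_dir, exist_ok=True)
    sf.write(os.path.join(out_dir, "mix.wav"), a + b, sr_a)
    sf.write(os.path.join(out_dir, "s1.wav"), a, sr_a)
    sf.write(os.path.join(out_dir, "s2.wav"), b, sr_a)

# example: build one random mixture from two different speakers
speakers = sorted(os.listdir(VCTK_WAV_DIR))
spk_a, spk_b = random.sample(speakers, 2)
utt_a = random.choice(os.listdir(os.path.join(VCTK_WAV_DIR, spk_a)))
utt_b = random.choice(os.listdir(os.path.join(VCTK_WAV_DIR, spk_b)))
mix_pair(os.path.join(VCTK_WAV_DIR, spk_a, utt_a),
         os.path.join(VCTK_WAV_DIR, spk_b, utt_b),
         os.path.join(OUT_DIR, spk_a + "_" + spk_b))
```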

tyoc213 commented 4 years ago

What is the original page? I found it here: https://pytorch.org/audio/_modules/torchaudio/datasets/vctk.html

But I think there was a previous version with 84 voices, and now it has more than 100... I'm not sure :)
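
If you just want to pull the corpus programmatically, here is a quick sketch assuming a recent torchaudio release, where the dataset class is `VCTK_092` (newer releases use this class instead of the older `torchaudio.datasets.VCTK` shown at the link above); the `root` path is an assumption:

```python
import torchaudio

# download (or reuse) VCTK 0.92 under ./data and read one utterance
dataset = torchaudio.datasets.VCTK_092(root="./data", download=True)
waveform, sample_rate, transcript, speaker_id, utterance_id = dataset[0]
print(speaker_id, utterance_id, sample_rate, waveform.shape)
```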