Open taylerpauls opened 5 years ago
Hi @taylerpauls, I'm also working through this repo at the moment!
If I understand you correctly, I think all you have to do is put your audio files (.mp3s, etc.) in a new folder under train_samples. Then, when you instantiate WavenetDataset, you call it like this:
data = WavenetDataset(dataset_file='train_samples/<insert_folder>/<insert_folder_name>.npz',
                      item_length=model.receptive_field + model.output_length - 1,
                      target_length=model.output_length,
                      file_location='train_samples/<insert_folder>',
                      test_stride=500)
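For context, the folder layout implied above might look like this (the folder and file names here are just placeholders, not anything the repo requires):

```shell
# create a new folder under train_samples and put your audio files in it
mkdir -p train_samples/my_dataset
# copy your audio files into it, e.g.:
# cp path/to/audio/*.mp3 train_samples/my_dataset/
touch train_samples/my_dataset/clip1.mp3  # placeholder standing in for a real clip
```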
Thank you!!
Could you help me understand how audio_data.py splits up the training vs. testing dataset? Also, if I load a snapshotted model, would I load the same data file used for training to also validate?
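I haven't verified this against audio_data.py, but the test_stride argument suggests a deterministic split where every test_stride-th item is held out for testing and the rest are used for training. A minimal sketch of that idea (the function name and details here are my assumptions, not the repo's actual code):

```python
import numpy as np

def split_by_stride(num_items, test_stride):
    """Hold out every test_stride-th item index for testing;
    all remaining indices are used for training."""
    indices = np.arange(num_items)
    # every test_stride-th item (items test_stride, 2*test_stride, ...)
    test_idx = indices[test_stride - 1::test_stride]
    train_mask = np.ones(num_items, dtype=bool)
    train_mask[test_idx] = False
    return indices[train_mask], test_idx

# e.g. with 10 items and test_stride=5, items 5 and 10 (indices 4, 9) are held out
train_idx, test_idx = split_by_stride(10, 5)
```

With test_stride=500 as in the example above, roughly 1 in every 500 items would land in the test set.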
@taylerpauls
I'm not too sure about how that works yet... Sorry.
Hi !!
If I want to use several different examples in my own dataset (e.g. 4 different audio clips), do I need to concatenate them and then put them in an .npz file?
-Tayler
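In case it helps, concatenating clips into one array and saving it with NumPy would look roughly like this. The file paths, array names, and single-array layout are assumptions on my part; the repo may build the .npz for you from the file_location folder instead (and real .mp3 files would first need decoding, e.g. with librosa):

```python
import os
import numpy as np

# Hypothetical example: three short clips already decoded as 1-D sample arrays.
clips = [
    np.zeros(16000, dtype=np.float32),    # 1 s of silence at 16 kHz
    np.ones(8000, dtype=np.float32),      # 0.5 s placeholder clip
    np.full(4000, 0.5, dtype=np.float32), # 0.25 s placeholder clip
]

# Concatenate all clips into one long waveform and save it as an .npz file.
os.makedirs('train_samples/example', exist_ok=True)
dataset = np.concatenate(clips)
np.savez('train_samples/example/example.npz', data=dataset)

# The saved array can be loaded back the same way.
loaded = np.load('train_samples/example/example.npz')['data']
```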