sigsep / open-unmix-pytorch

Open-Unmix - Music Source Separation for PyTorch
https://sigsep.github.io/open-unmix/
MIT License

sourcefolder training #4

Closed · HIN0209 closed this issue 5 years ago

HIN0209 commented 5 years ago

Hello again, @sigsep: sorry to bother you, but I must have made another novice mistake while training on a "sourcefolder" dataset. Specifically, I am using DCASE2013_subtask2/singlesounds_stereo, which has 320 wav files covering 16 classes of environmental noises (alert, clearthroat, cough, etc., 20 files each). I separated them into different folders according to the noise labels (./DCASE2013 (as root)/train/alert/alert01.wav, alert02.wav, etc.).

When I tried the following command, the error below occurred. Command:

python train.py --dataset sourcefolder --root ./DCASE2013 --target-dir alert --interferer-dirs clearthroat cough

Error message:

        Using GPU: True
        100%|████████████████████████| 3/3 [00:00<00:00, 36.16it/s]
        100%|████████████████████████| 3/3 [00:00<00:00, 10745.44it/s]
          0%|                        | 0/1000 [00:00<?, ?it/s]
        Traceback (most recent call last):
          File "train.py", line 291, in <module>
            main()
          File "train.py", line 174, in main
            scaler_mean, scaler_std = get_statistics(args, train_dataset)
          File "train.py", line 66, in get_statistics
            x, y = dataset_scaler[ind]
          File "/xxx/open-unmix-pytorch/data.py", line 367, in __getitem__
            source_path = random.choice(self.source_tracks[source])
          File "/xxx/anaconda3/envs/open-unmix-pytorch-gpu/lib/python3.7/random.py", line 261, in choice
            raise IndexError('Cannot choose from an empty sequence') from None
        IndexError: Cannot choose from an empty sequence


Am I missing something? Looks like it does not find the training files.

faroit commented 5 years ago

Hi, thanks for your input. We are really keen on people trying out other datasets, so feel free to comment on the API/documentation for this as well.

Regarding the issue, I think the command line looks good (did you make sure the --ext flag is set to wav?).
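
As a quick sanity check (just a sketch, independent of the open-unmix code; the folder names are the ones from your command), you could count the wav files that should be picked up under each source directory:

        from pathlib import Path

        root = Path("./DCASE2013")
        for source in ("alert", "clearthroat", "cough"):
            # the sourcefolder layout keeps each source class in its own folder under train/
            files = list((root / "train" / source).glob("*.wav"))
            print(source, len(files), "wav files found")

If any of these counts is zero (for example because the extension does not match), the dataset ends up calling random.choice on an empty list, which is exactly the IndexError in your traceback.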

I just tested with the following folder structure:
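
Roughly something along these lines (inferred from the command below; the file names are placeholders):

        ~/data/
        └── train/
            ├── speech/
            │   ├── sp_0001.wav
            │   └── ...
            └── noise/
                ├── no_0001.wav
                └── ...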

running

python train.py --root ~/data --dataset sourcefolder --interferer-dirs noise --target-dir speech --seq-dur 1.0 --nb-channels 1

This worked for me. One more thing (for when you have figured out the problem): the files need to be of the same length, because --seq-dur is ignored when calculating the statistics. You will notice that this is a problem if torch.stack fails. If that is the case, you might want to comment out this line.
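
To make the length issue concrete, here is a minimal, self-contained illustration (not the actual train.py code) of why stacking variable-length excerpts fails, and how padding them to a common length would avoid it:

        import torch

        # two mono "tracks" of different lengths, as you get from
        # variable-duration wav files once --seq-dur is ignored
        a = torch.randn(1, 44100)   # 1.0 s at 44.1 kHz
        b = torch.randn(1, 66150)   # 1.5 s at 44.1 kHz

        # torch.stack requires identical shapes, so this would raise a RuntimeError:
        # torch.stack([a, b])

        # zero-padding everything to the longest length works:
        max_len = max(x.shape[-1] for x in (a, b))
        padded = [torch.nn.functional.pad(x, (0, max_len - x.shape[-1])) for x in (a, b)]
        print(torch.stack(padded).shape)   # torch.Size([2, 1, 66150])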

HIN0209 commented 5 years ago

@faroit, thanks for the quick reply. I changed the dataset to DCASE2018 (TUT-urban-acoustic-scenes-2018-development), whose audio files all have identical length, instead of DCASE2013, which contains files of variable duration. I added --seq-dur 3 --nb-channels 1 and it works! So the variable duration of the data files appears to have been the problem.

By the way, I commented out the two lines as suggested, but then got an error that "pbar is not defined" on the next line, "for ind in pbar:".

My comments on documentation are the following:

  1. How can I plot the loss curves? Using tensorboard?
  2. --nb-train-samples: does this mean the total number of training samples AFTER augmentation?
faroit commented 5 years ago

By the way, I commented out the two lines as suggested, but then got an error that "pbar is not defined" on the next line, "for ind in pbar:".

Sorry, I pointed to two lines; I meant just L63.

How can I plot the loss curves? Using tensorboard?

The loss values are written to the json files and can easily be plotted with your own code, such as:

        import json
        import matplotlib.pyplot as plt

        with open('vocals.json', 'r') as stream:
            r = json.load(stream)

        plt.plot(r['valid_loss_history'], label='validation loss')
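
If you also want the training curve and an actual figure window, something like the following should work on top of the snippet above (the 'train_loss_history' key is assumed here; check the keys in your own json file):

        plt.plot(r['train_loss_history'], label='training loss')
        plt.legend()
        plt.xlabel('epoch')
        plt.ylabel('loss')
        plt.show()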

Currently we don't plan to add tensorboard support, in order to keep the code small and lean.

--nb-train-samples: does this mean the total number of training samples AFTER augmentation?

Augmentation is always applied to all samples, so yes, --nb-train-samples includes augmentation.

HIN0209 commented 5 years ago

Thank you again!!