Closed euniwang333 closed 5 months ago
After adding data augmentation and setting --niter to 0, I can get better results, but the accompaniment.wav output still contains vocals. Is there any suggestion for improving the training result? Thanks
Traceback (most recent call last):
  File "train.py", line 376, in <module>
    main()
  File "train.py", line 259, in main
    scaler_mean, scaler_std = get_statistics(args, encoder, train_dataset)
  File "train.py", line 78, in get_statistics
    x, y = dataset_scaler[ind]
  File "/home/adityamishra19/miniconda3/envs/umx-gpu/lib/python3.7/site-packages/openunmix/data.py", line 473, in __getitem__
    source_path = random.choice(self.source_tracks[source])
  File "/home/adityamishra19/miniconda3/envs/umx-gpu/lib/python3.7/random.py", line 261, in choice
    raise IndexError('Cannot choose from an empty sequence') from None
IndexError: Cannot choose from an empty sequence

I am getting this error while training my model. How can I solve it?
Hi, please check that your data path is correct. If there are .wav files in your data folder, please add --is-wav if you use the default dataset format. Hope it helps.
I am facing the same issue of IndexError while training the model with my own dataset. I have checked for the data path as well, still I'm not able to resolve this issue. I'm using WSL and the data is in wav format.
I have written following in the terminal to train the model:
python train.py \
    --dataset sourcefolder \
    --output open-unmix-512 \
    --root ../openunmix/data \
    --target-dir podcasts \
    --interferer-dirs interfer \
    --ext .wav \
    --nb-train-samples 1800 \
    --nb-valid-samples 100
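For the sourcefolder dataset, the loader globs files under root/&lt;split&gt;/&lt;source-dir&gt;/*.&lt;ext&gt;; if any of those globs comes back empty, `random.choice` later raises the IndexError above. A small checker sketch you can run first; the directory names below mirror the flags in the command and are only an assumption about your layout:

```python
import glob
import os

def count_wavs(root, splits=("train", "valid"),
               source_dirs=("podcasts", "interfer")):
    """Count .wav files per split/source dir.

    A count of 0 for any combination is exactly the situation that
    makes data.py's random.choice raise IndexError during training.
    """
    counts = {}
    for split in splits:
        for d in source_dirs:
            pattern = os.path.join(root, split, d, "*.wav")
            counts[(split, d)] = len(glob.glob(pattern))
    return counts

# Example (adjust "../openunmix/data" to your --root):
for key, n in count_wavs("../openunmix/data").items():
    print(key, n, "<-- empty: training will crash" if n == 0 else "")
```

On WSL it is also worth double-checking that the relative path resolves from the directory you actually launch train.py in.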
Hi, I haven't used the sourcefolder dataset before. Maybe you can try the trackfolder_fix dataset instead.
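With trackfolder_fix, each track gets its own folder containing the same fixed file names (the target plus the interferer files). A hedged sketch of a layout checker, assuming a root/&lt;split&gt;/&lt;track&gt;/&lt;source&gt;.wav structure; the file names are just examples matching a vocals/accompaniment split:

```python
import os

def check_trackfolder_fix(root, split="train",
                          target_file="accompaniment.wav",
                          interferer_files=("vocals.wav",)):
    """Return (track, filename) pairs that are missing on disk.

    trackfolder_fix expects every track folder under root/<split>/
    to contain the same fixed set of source files.
    """
    missing = []
    split_dir = os.path.join(root, split)
    for track in sorted(os.listdir(split_dir)):
        track_dir = os.path.join(split_dir, track)
        if not os.path.isdir(track_dir):
            continue
        for fname in (target_file, *interferer_files):
            if not os.path.isfile(os.path.join(track_dir, fname)):
                missing.append((track, fname))
    return missing
```

An empty return value means every track folder has the full set of files the loader will ask for.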
@bhanshi can this be closed?
yes
Thanks for your great work! But when I trained the model myself, I didn't get a good result. I want to get ["vocals", "accompaniment"] outputs from a mixture file. My data format is like
The command I use for the accompaniment model is:
python scripts/train.py --root ../data/musdb18hq --dataset trackfolder_fix --interferer-files vocals.wav --target-file accompaniment.wav --epochs 1000 --output my_model
I can see the loss decrease to 0.00x, but when I run inference through the CLI after training finishes, the result is similar to the mixture file. I'm wondering if there's a problem with my training script. Thanks so much
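One quick way to quantify this symptom: compare the separated estimate against the mixture waveform directly. If the relative residual is near zero, the model is effectively passing its input through unchanged. A minimal NumPy sketch (load the two .wav files into arrays however you normally do; the function names here are just illustrative):

```python
import numpy as np

def relative_residual(mixture, estimate, eps=1e-12):
    """||mixture - estimate|| / ||mixture||.

    A value close to 0 means the "separated" output is basically
    just the mixture, i.e. the model learned an identity mapping.
    """
    mixture = np.asarray(mixture, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.linalg.norm(mixture - estimate)
                 / (np.linalg.norm(mixture) + eps))

# e.g. relative_residual(mix_audio, accompaniment_audio) near 0.0
# suggests the model did not learn to remove the vocals.
```

A low training loss with a near-zero residual like this usually points at the targets the model was trained on, not at the inference step.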