Closed lyghter closed 3 years ago
Hi @lyghter,

The only difference between the two models is the random seed. lasaft_large_2020.ckpt is the checkpoint that produced the lowest validation loss when we trained a LaSAFT model deterministically with a random seed of 2020; lasaft_large_2021.ckpt was trained the same way with a seed of 2021. In the command below, --seed 2021 sets the random seed to 2021.
```
main.py --problem_name conditioned_separation --mode train --musdb_root ../repos/musdb18_wav --n_blocks 9 --num_tdfs 6 --n_fft 4096 --hop_length 1024 --precision 16 --embedding_dim 64 --pin_memory True --save_top_k 3 --patience 10 --deterministic --model lasaft_net --gpus 4 --distributed_backend ddp --sync_batchnorm True --run_id lasaft_2021_als --batch_size 4 --seed 2021 --log wandb --lr 0.0001 --auto_lr_schedule True
```
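For reference, deterministic seeding of the kind the `--seed` and `--deterministic` flags enable can be sketched as below. This is a generic PyTorch sketch, not the actual lasaft training code; the helper name `seed_everything` is illustrative:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed every RNG involved in training (sketch of what --seed does)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Roughly what a --deterministic flag turns on for cuDNN:
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(2021)
```

With the same seed and deterministic backends, repeated runs draw identical random tensors, which is what makes the two checkpoints comparable apart from their seeds.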
Hi, @lyghter again, I think your questions are related to the music-demixing-challenge-ismir-2021. Have you submitted lasaft for it? If you have, can you share the SDR scores with us?
@ws-choi MDX organizer here, we would love to have your model as another baseline, that we would officially tag. If you have time, can you prepare a fork of the starter kit?
Hi @faroit ,
It will be a great honor for us if our model is listed as a baseline. The authors have been working on it since Monday, but unfortunately we found that the large LaSAFT+GPoCM models cannot meet the time limit (inference time per track far exceeds 4 minutes). Since we had not saved checkpoints of the small models described in the paper, we are re-training them. It will take at least one week from now to submit our models.
@ws-choi okay, no worries. Did you try compiling it with TorchScript or exporting it to ONNX to speed up inference?
@faroit no, I have not tried them yet. I will try them, thank you for the recommendation! :)
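The TorchScript route suggested above can be sketched as follows. The `TinySeparator` module is a hypothetical stand-in for the real network (LaSAFT additionally takes a condition vector), and the file name is arbitrary:

```python
import torch
import torch.nn as nn


class TinySeparator(nn.Module):
    """Toy placeholder for a source-separation network."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Conv2d(2, 2, kernel_size=3, padding=1)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        return self.net(spec)


model = TinySeparator().eval()
example = torch.randn(1, 2, 256, 128)  # (batch, channels, freq, time)

# Trace the model with an example input; the resulting module runs
# without the original Python class definition and often faster.
scripted = torch.jit.trace(model, example)
scripted.save("separator_ts.pt")
```

`torch.onnx.export(model, example, "separator.onnx")` offers a similar path when an ONNX runtime is preferred for deployment.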
1) Are lasaft_large_2020.ckpt and lasaft_large_2021.ckpt trained on the "train" split of MUSDB18 or on the full MUSDB18 ("train" and "test")?
2) What is the difference between these models?