Hi, thanks for sharing this awesome work! A generative music model with chord control is definitely a big step forward.
Currently I am trying to train the model by myself with my own data. Here are some questions:
Which dataset did you use? If it can't be revealed, could you describe the approximate size of the data?
Was musicgen-chord trained from scratch or fine-tuned? If fine-tuned, I saw there's a section on fine-tuning, but there's no description of which MusicGen model you started from (maybe the melody model). Are the fine-tuning instructions meant for training on the current model, or on Meta's MusicGen model?
If the musicgen-chord model was trained from scratch, could you share how long it took and what hardware you used?
Hello @wayne391 !
Thanks for asking! Here are the answers to your questions.
The model is not actually fine-tuned. The weights are taken from Meta's MusicGen-Melody model; I just tweaked the melody conditioning part by re-purposing the one-hot melody condition vectors into multi-hot chord condition vectors.
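To illustrate the idea (this is a simplified sketch, not the repo's actual code, and the function names here are hypothetical): MusicGen-Melody's chromagram conditioning activates a single pitch class per frame, while a chord is naturally a set of pitch classes, so the same 12-dimensional vector can carry a multi-hot encoding instead.

```python
import numpy as np

N_CHROMA = 12  # pitch classes C, C#, ..., B

def one_hot_melody_frame(pitch_class: int) -> np.ndarray:
    """One-hot chroma frame: a single active pitch class (melody note)."""
    v = np.zeros(N_CHROMA, dtype=np.float32)
    v[pitch_class] = 1.0
    return v

def multi_hot_chord_frame(pitch_classes: list[int]) -> np.ndarray:
    """Multi-hot chroma frame: several active pitch classes (a chord)."""
    v = np.zeros(N_CHROMA, dtype=np.float32)
    v[pitch_classes] = 1.0
    return v

# C major triad: C (0), E (4), G (7)
c_major = multi_hot_chord_frame([0, 4, 7])
```

Because both encodings live in the same 12-dimensional chroma space, the pretrained conditioning pathway can consume the multi-hot vectors without architectural changes.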
The answer above should cover your other questions too: since the weights come directly from MusicGen-Melody, there was no separate training dataset or training run on my side.