Open · MahlerMozart opened this issue 1 month ago
Hi, the link points to the MusiConGen checkpoint. You can also modify the MusicGen-melody checkpoint to train on your own dataset by aligning its keys and weight shapes with the provided checkpoint.
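For anyone trying this, here is a minimal sketch of that alignment check, assuming both files load as (possibly wrapped) PyTorch state dicts; the file names and the wrapper keys (`best_state`, `state_dict`, `model`) are placeholders and assumptions, not something defined by the repository:

```python
import torch

melody = torch.load("musicgen_melody_state_dict.bin", map_location="cpu")        # placeholder path
musicongen = torch.load("MusiConGen_training_checkpoint.th", map_location="cpu")  # placeholder path

def flat_state(ckpt):
    # Unwrap common wrapper keys until we reach a flat name -> tensor dict.
    for key in ("best_state", "state_dict", "model"):
        if isinstance(ckpt, dict) and key in ckpt and isinstance(ckpt[key], dict):
            ckpt = ckpt[key]
    return ckpt

a, b = flat_state(melody), flat_state(musicongen)

only_in_melody = sorted(set(a) - set(b))
only_in_musicongen = sorted(set(b) - set(a))
shape_mismatch = [k for k in sorted(set(a) & set(b))
                  if hasattr(a[k], "shape") and a[k].shape != b[k].shape]

print("keys only in MusicGen-melody:", only_in_melody)
print("keys only in MusiConGen:", only_in_musicongen)
print("shared keys with different shapes:", shape_mismatch)
```

The printed key and shape differences show exactly which layers need to be adapted before training.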
Thank you for the fast reply! Could you please elaborate on the details of the checkpoint and the training process? Here is what I have found.
I am very interested in your work and truly appreciate your patience and help on the above questions. Thank you!
Sorry for the late reply.
Can you show the difference between the output weight formats?
The training checkpoint is the checkpoint trained from a modified version of the MusicGen-melody model. The inference checkpoint is the checkpoint (a .bin file) exported from the training checkpoint.
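As a rough illustration (not the project's own tooling), you can compare the two formats with plain `torch.load`; the exact top-level keys depend on the trainer, so read whatever your files actually print rather than treating this as a fixed schema. Paths are placeholders:

```python
import torch

training_ckpt = torch.load("checkpoint.th", map_location="cpu")    # placeholder: full training state
inference_ckpt = torch.load("state_dict.bin", map_location="cpu")  # placeholder: exported weights

# The training file usually bundles the model weights with optimizer/epoch bookkeeping;
# the exported .bin keeps only what generation needs. Printing the top-level keys shows
# the difference for your particular files.
print("training checkpoint keys: ", list(training_ckpt.keys()))
print("inference checkpoint keys:", list(inference_ckpt.keys()))
```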
No. Before training on your own data, you have to modify some layers of the pretrained MusicGen-melody checkpoint so that they correspond to the MusiConGen training weights at https://huggingface.co/Cyan0731/MusiConGen_training/tree/main
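A hypothetical sketch of that alignment step, assuming both checkpoints have already been unwrapped to flat name-to-tensor dicts (as in the earlier sketch) and using placeholder file names:

```python
import torch

target = torch.load("MusiConGen_training_checkpoint_flat.th", map_location="cpu")  # defines expected keys/shapes
source = torch.load("musicgen_melody_state_dict_flat.bin", map_location="cpu")     # pretrained MusicGen-melody

copied, skipped = 0, []
for name, tensor in source.items():
    if name in target and hasattr(target[name], "shape") and target[name].shape == tensor.shape:
        target[name] = tensor.clone()   # reuse the pretrained weight where key and shape agree
        copied += 1
    else:
        skipped.append(name)            # new or reshaped layer: keep the MusiConGen initialization

print(f"copied {copied} tensors; skipped {len(skipped)} mismatched keys")
torch.save(target, "aligned_init_state_dict.th")  # starting point for your own training run
```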
Yes, in my experiments the model converges when the CE loss is about 3.2; the validation loss is about 3.5. However, to determine whether the model is well trained, a listening test is the most accurate evaluation.
Hello, great work! One question about the training weights of MusiConGen provided at the following link: are they the weights of MusicGen-melody (1.5B) or the checkpoint of MusiConGen? Thank you! https://huggingface.co/Cyan0731/MusiConGen_training/tree/main