Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
I've done some fine-tuning of `musicgen-small`, but I want to resume training from the checkpoint I exported (i.e., as `state_dict.bin`). I'm not sure how to do that. I tried:

```
continue_from: /home/james/src/audiocraft/test_model/3genre
```

in my solver config, but that seems to be ignored. I also tried:

```
continue_from: //pretrained/home/james/src/audiocraft/test_model/3genre
```

since the `StandardSolver` in `base.py` seems to have a `load_checkpoints` function which mentions `//pretrained/`... but both options just load from the last checkpoint in my `tmp` folder, with the previous (finished) scheduler, so although training begins, the learning rate is zero.

If I use `--clear` in the dora call, then I lose my previous checkpoints... this should be easy (and presumably it is), but I can't find any documentation on it.
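For reference while debugging, the generic PyTorch mechanics of restoring weights from an exported file look like the sketch below. `ToyModel` is a hypothetical stand-in for the real language model, and the exported `state_dict.bin` may wrap the weights in an outer dict (e.g. under a `best_state` key), so it is worth inspecting the keys of `torch.load(...)` before calling `load_state_dict`:

```python
import os
import tempfile

import torch
import torch.nn as nn


class ToyModel(nn.Module):
    """Hypothetical stand-in for the actual model being fine-tuned."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

    def forward(self, x):
        return self.proj(x)


with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "state_dict.bin")

    # "Export": save the trained model's parameters to disk.
    src = ToyModel()
    torch.save(src.state_dict(), path)

    # "Resume": load onto CPU and restore the weights into a fresh model.
    ckpt = torch.load(path, map_location="cpu")
    dst = ToyModel()
    dst.load_state_dict(ckpt)
```

This only restores model weights; a full training resume also needs the optimizer and scheduler state, which is exactly what the solver's own checkpoint (the one in the `tmp` folder) carries and the exported `state_dict.bin` typically does not.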