Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.
Hi, I want to train an EnCodec model (just EnCodec on its own, not as part of MusicGen or AudioGen) using my own datasets. I followed the instructions in this doc, but to be honest, I feel very confused.
From the repository root, I ran `dora grid compression.encodec_base_24khz`, and got this error:
omegaconf.errors.InterpolationResolutionError: KeyError raised while resolving interpolation: Environment variable USER not found. full_key: dora.dir, object_type: dict
Does anyone know what causes this error?
Apart from that, how do I set the path to my dataset? Which configuration file should be changed?
I'm also confused about the `dora grid compression.encodec_base_24khz` command itself. As I understand it, this command loads the configuration file, but that configuration doesn't reference any training script, nor the path to the training data... Should `train.py` under the audiocraft directory also be called somewhere? Can anyone list all the commands needed to train an EnCodec model on a personal dataset? Thanks!
If `dora grid` doesn't work, try `dora run`. Grids are typically meant for a Slurm cluster, so maybe your PC/server doesn't support them.
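Also, the traceback itself says Dora fails while resolving `dora.dir` because `$USER` is unset, which can happen in containers or CI shells. A minimal workaround, assuming a POSIX shell:

```shell
# Dora interpolates dora.dir from the USER environment variable;
# if your environment leaves it unset, export it before launching.
export USER=$(whoami)
echo "$USER"
```

Then retry the launch, e.g. `dora run solver=compression/encodec_base_24khz`.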
To point training at your own dataset, you first need to create the dataset manifest file as the doc describes. Then set the path in config/dset/...yaml and reference that yaml from solver/compression/...yaml.
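To make that concrete, here's a rough sketch (all names below are placeholders, and the exact keys follow my reading of the datasets doc, so verify against your checkout). First build a JSONL manifest, e.g. with the helper I believe audiocraft ships: `python -m audiocraft.data.audio_dataset /path/to/my/audio egs/my_dataset/data.jsonl`. Then a dset file such as `config/dset/audio/my_dataset.yaml` could look roughly like:

```yaml
# config/dset/audio/my_dataset.yaml -- hypothetical name, modeled on the
# example dset config in the repo; double-check the keys in your version.
# @package __global__
datasource:
  max_sample_rate: 24000
  max_channels: 1
  train: egs/my_dataset      # directory containing data.jsonl
  valid: egs/my_dataset
  evaluate: egs/my_dataset
  generate: egs/my_dataset
```

You can either reference it from the solver yaml as noted above, or select it on the command line, e.g. `dora run solver=compression/encodec_base_24khz dset=audio/my_dataset`.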
Hope these hints help.