hieuhthh opened 1 month ago
I suppose you could run the code and send a prompt. How did you do it? I'm trying from Google Colab and I can't get past the point where it asks for a ckpt path. I hope you can help me.
You need to train a model first and then point the checkpoint path in generation.py at the trained weights.
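A minimal sketch of what that edit might look like. The file name `generation.py`, the helper name, and the example path are assumptions about the repo, not its actual code; adapt them to whatever variable the script uses:

```python
# Hypothetical helper: resolve the checkpoint path before loading,
# so a missing/placeholder path fails with a clear message instead
# of a cryptic load error deep inside the generation script.
from pathlib import Path


def resolve_ckpt(path_str: str) -> Path:
    """Return the checkpoint path, raising early if it does not exist."""
    path = Path(path_str)
    if not path.exists():
        raise FileNotFoundError(
            f"Checkpoint not found: {path} -- set this to the .pt file "
            "your training run produced."
        )
    return path
```

In `generation.py` you would then replace the hard-coded placeholder with something like `ckpt_path = resolve_ckpt("results/model-best.pt")` (example path, not the repo's actual default).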
I followed the tutorial and trained my own model on roughly 300 hours of song accompaniment data. Training converged well, but when I generate from the best checkpoint, even with a prompt taken straight from the training set, the model only produces noisy audio.
I checked the code and noticed that only the UNet1D is saved and loaded during inference; the Diffusion wrapper is not. Has anyone successfully trained a model and run inference from it who could offer tips? Thank you!
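For what it's worth, one common cause of this symptom is checkpointing only the inner network while the diffusion wrapper owns extra state (noise-schedule buffers, EMA weights, etc.) that never gets restored. A minimal sketch of the difference, assuming a PyTorch setup; the class names `UNet1D` and `GaussianDiffusion1D` mirror this repo's naming but the tiny modules below are stand-ins, not its real code:

```python
# Sketch: save/restore the *wrapper's* state_dict, not just the inner UNet's.
import io

import torch
import torch.nn as nn


class UNet1D(nn.Module):
    """Stand-in for the denoising network."""

    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)


class GaussianDiffusion1D(nn.Module):
    """Stand-in wrapper that owns extra buffers (e.g. the noise schedule)."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model
        # If only model.state_dict() is checkpointed, buffers like this
        # are silently reinitialized at inference time.
        self.register_buffer("betas", torch.linspace(1e-4, 2e-2, 10))


diffusion = GaussianDiffusion1D(UNet1D())

# Saving the wrapper's state_dict captures the UNet weights *and* the buffers.
buf = io.BytesIO()
torch.save({"diffusion": diffusion.state_dict()}, buf)

buf.seek(0)
restored = GaussianDiffusion1D(UNet1D())
restored.load_state_dict(torch.load(buf)["diffusion"])
```

If the training script keeps an EMA copy of the weights, also check that inference loads the EMA state rather than the raw online weights; sampling from non-EMA weights is another frequent source of noise-only output.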