[Open] AI-Guru opened this issue 1 year ago

Reply excerpt: use use_text_conditioning=False and provide your own embedding with embedding=... . See here if you want to make your own plugin for the UNet.

Follow-up question: the number of parameters in the text-conditioning model is only 562M rather than the 857M in the Moûsai paper; is there any extra config in the text-conditioning model?
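Concretely, that suggestion might look like the sketch below, assembled from the README's conditional example; the exact kwarg combination (use_embedding_cfg, embedding_max_length, embedding_features) and the embedding shape are assumptions rather than a verified recipe.

```python
import torch
from audio_diffusion_pytorch import DiffusionModel, UNetV0, VDiffusion, VSampler

# Sketch: skip the built-in T5 text conditioning and feed a precomputed embedding instead.
model = DiffusionModel(
    net_t=UNetV0,
    in_channels=2,
    channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
    factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],
    items=[1, 2, 2, 2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 0, 1, 1, 1, 1],
    attention_heads=8,
    attention_features=64,
    cross_attentions=[0, 0, 0, 1, 1, 1, 1, 1, 1],  # layers that consume the embedding
    use_text_conditioning=False,  # no T5 plugin
    use_embedding_cfg=True,       # keep classifier-free guidance on the provided embedding (assumption)
    embedding_max_length=64,      # should match the token dimension of the embedding you pass (assumption)
    embedding_features=768,       # should match the feature dimension of the embedding you pass (assumption)
    diffusion_t=VDiffusion,
    sampler_t=VSampler,
)

# Any [batch, max_length, features] tensor can stand in for the T5 output here.
audio = torch.randn(2, 2, 2**18)
my_embedding = torch.randn(2, 64, 768)
loss = model(audio, embedding=my_embedding, embedding_mask_proba=0.1)
loss.backward()
```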
Hi!
I have worked with unconditional generation using this fine repo. It is a lot of fun! I will do latent diffusion next. I am already looking forward to it.
Text-conditional generation promises a lot of fun, too. I have a few questions.
In the README, in the conditional section, we can read "Text conditioning, one element per batch". This means "one text per waveform", and thus "a batch of texts for a batch of waveforms", right? Not "one text for a batch of waveforms"?
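In other words, for a batch of two waveforms I would pass a list of two texts, roughly like this sketch (the model config is copied from the README's conditional example; the batched call itself is my assumption):

```python
import torch
from audio_diffusion_pytorch import DiffusionModel, UNetV0, VDiffusion, VSampler

# Text-conditioned model as configured in the README's conditional example.
model = DiffusionModel(
    net_t=UNetV0,
    in_channels=2,
    channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
    factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],
    items=[1, 2, 2, 2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 0, 1, 1, 1, 1],
    attention_heads=8,
    attention_features=64,
    cross_attentions=[0, 0, 0, 1, 1, 1, 1, 1, 1],
    use_text_conditioning=True,
    use_embedding_cfg=True,
    embedding_max_length=64,
    embedding_features=768,
    diffusion_t=VDiffusion,
    sampler_t=VSampler,
)

# Batch of 2 waveforms and a matching list of 2 texts: one text per waveform.
audio = torch.randn(2, 2, 2**18)
loss = model(
    audio,
    text=["acoustic guitar, slow tempo", "heavy rain on a tin roof"],  # len(text) == batch size
    embedding_mask_proba=0.1,
)
loss.backward()
```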
I believe latent diffusion and text conditioning to be orthogonal. Is it safe to assume that DiffusionAE would work with text conditioning by just adding the right kwargs?
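Concretely, what I have in mind is the sketch below: the README's DiffusionAE example with the text-conditioning kwargs from the conditional example bolted on. It is untested on my side and assumes those kwargs are simply forwarded to the UNet.

```python
import torch
from audio_diffusion_pytorch import DiffusionAE, UNetV0, VDiffusion, VSampler
from audio_encoders_pytorch import MelE1d, TanhBottleneck

# Sketch: README DiffusionAE example plus the text-conditioning kwargs (assumption that they reach the UNet).
autoencoder = DiffusionAE(
    encoder=MelE1d(  # mel-spectrogram encoder, as in the README
        in_channels=2,
        channels=512,
        multipliers=[1, 1],
        factors=[2],
        num_blocks=[12],
        out_channels=32,
        mel_channels=80,
        mel_sample_rate=48000,
        mel_normalize_log=True,
        bottleneck=TanhBottleneck(),
    ),
    inject_depth=6,
    net_t=UNetV0,
    in_channels=2,
    channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
    factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],
    items=[1, 2, 2, 2, 2, 2, 2, 4, 4],
    diffusion_t=VDiffusion,
    sampler_t=VSampler,
    # Text-conditioning kwargs copied from the conditional example (assumption):
    attentions=[0, 0, 0, 0, 0, 1, 1, 1, 1],
    attention_heads=8,
    attention_features=64,
    cross_attentions=[0, 0, 0, 1, 1, 1, 1, 1, 1],
    use_text_conditioning=True,
    use_embedding_cfg=True,
    embedding_max_length=64,
    embedding_features=768,
)

audio = torch.randn(1, 2, 2**18)
loss = autoencoder(audio, text=["the audio description"], embedding_mask_proba=0.1)
loss.backward()
```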
What would be necessary in order to replace the T5 embeddings with something else?
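Put differently: is the contract just "any encoder that outputs a [batch, num_tokens, embedding_features] tensor", which could then be fed through the embedding=... path mentioned above? A hypothetical stand-in encoder (the name and shapes are my own, only to illustrate the idea):

```python
import torch
import torch.nn as nn

class MyTextEncoder(nn.Module):
    """Hypothetical T5 replacement: any module mapping token ids to [batch, num_tokens, features]."""

    def __init__(self, vocab_size: int = 10000, features: int = 768):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, features)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: [batch, num_tokens] -> embedding: [batch, num_tokens, features]
        return self.token_emb(token_ids)

encoder = MyTextEncoder()
token_ids = torch.randint(0, 10000, (2, 64))
embedding = encoder(token_ids)  # [2, 64, 768], to be passed as model(audio, embedding=embedding)
```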
What would be the consequences of extending the number of tokens for T5?
This is so cool!
Best, Tristan