explainingai-code / StableDiffusion-PyTorch

This repo implements a Stable Diffusion model in PyTorch with all the essential components.

How to improve text-conditioned generation? #17

Open Nikita-Sherstnev opened 3 months ago

Nikita-Sherstnev commented 3 months ago

I see that the model is not very good at text-conditioned generation. How can this be improved? Should I train the CLIP model itself, or just train the LDM for longer?

explainingai-code commented 3 months ago

When I trained this on Celeb captions, I also found that for captions that are very common (like hair color), the trained text-conditioned diffusion model performed very well. But for words that weren't as frequent, the model wasn't honouring them at all. I suspect that training the LDM longer (or getting more images for the infrequent captions) should indeed improve the generation results for them. You can definitely try training CLIP as well, but I feel that unless you have very rare words in your captions (or words very different from what CLIP was trained on), training the LDM for longer will be more fruitful than training the CLIP model.
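In case it helps, here is a minimal sketch of what that suggestion looks like in training code: keep the CLIP-style text encoder frozen and hand only the LDM/U-Net parameters to the optimizer, so extra training epochs go entirely into the diffusion model. The module names here (`text_encoder`, `ldm_unet`) are hypothetical stand-ins, not the actual classes from this repo:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the repo's text encoder and diffusion U-Net.
text_encoder = nn.Sequential(nn.Embedding(1000, 64), nn.Linear(64, 64))
ldm_unet = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

# Freeze the text encoder: its weights stay at the pretrained CLIP values,
# and no gradients are computed for it during the longer LDM training run.
for p in text_encoder.parameters():
    p.requires_grad = False
text_encoder.eval()

# Optimize only the diffusion model's parameters.
trainable = [p for p in ldm_unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# One illustrative step: encode captions without tracking gradients,
# then update only the U-Net against a dummy loss.
token_ids = torch.randint(0, 1000, (4, 1))
with torch.no_grad():
    text_context = text_encoder(token_ids)
loss = ldm_unet(text_context).pow(2).mean()
loss.backward()
optimizer.step()
```

If you do decide to fine-tune CLIP for rare words instead, the change is just to skip the freezing loop and add `text_encoder.parameters()` to the optimizer, typically with a much smaller learning rate.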