AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Previews not picking up new embeddings when training textual inversion #11993

Open dzleon opened 1 year ago

dzleon commented 1 year ago

Is there an existing issue for this?

What happened?

When training a new textual inversion embedding, the generated previews keep using the first version of the embedding even as training progresses. This is obvious when selecting "Read parameters (prompt, etc...) from txt2img tab when making previews" and using a fixed seed.

This seems related to the new conditioning caches (cached_c, cached_uc) in processing.StableDiffusionProcessingTxt2Img: the first version of the embedding is cached and reused on every subsequent preview. That caching would need to be disabled (or invalidated) during training.
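The stale-cache behavior described above, and the invalidation the reporter suggests, can be sketched as follows. The cached_c / cached_uc names come from the report; the cache layout, get_conds method, and invalidate_cond_caches helper are illustrative assumptions, not the actual webui implementation.

```python
# Sketch of the suspected caching behavior (hypothetical, not webui source).
class StableDiffusionProcessingTxt2Img:
    # Class-level caches shared between runs: [cache_key, conditioning].
    # The key is derived from the prompt text, so a retrained embedding
    # referenced by the same name still hits the stale entry.
    cached_c = [None, None]
    cached_uc = [None, None]

    def get_conds(self, prompt, encode):
        if self.cached_c[0] == prompt:  # stale hit: embedding changed, key did not
            return self.cached_c[1]
        cond = encode(prompt)
        StableDiffusionProcessingTxt2Img.cached_c = [prompt, cond]
        return cond


def invalidate_cond_caches():
    """Hypothetical fix: drop cached conditionings so the next preview
    re-encodes the prompt with the freshly trained embedding weights."""
    StableDiffusionProcessingTxt2Img.cached_c = [None, None]
    StableDiffusionProcessingTxt2Img.cached_uc = [None, None]
```

Under this sketch, the training loop would call invalidate_cond_caches() right before generating each preview image, forcing the prompt to be re-encoded instead of reusing the first cached conditioning.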

Steps to reproduce the problem

  1. Set up a new embedding for training.
  2. Go to txt2img tab and set up an image that uses the new embedding, select a fixed seed.
  3. Go to the training tab, select the new embedding, and set up a normal training run. Select "Read parameters (prompt, etc...) from txt2img tab when making previews".
  4. Start training

What should have happened?

Previews should update as the embedding is trained. Instead, the same image is generated over and over, until you stop training and click to refresh the list of embeddings.

Version or Commit where the problem happens

version: v1.4.1

What Python version are you running on ?

Python 3.11.x (or above; not supported yet)

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (GTX 16 below)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

No

List of extensions

Controlnet, ddetailer, etc. None that should affect training.

Console logs

N/A

Additional information

No response

JarnoLeConte commented 6 months ago

I have the same issue.

This seems like a bug because it makes the previews worthless and makes me think that the training failed to learn anything.

Or am I doing something wrong? Should I not include the embedding name in the prompt?