Open dzleon opened 1 year ago
I have the same issue.
This seems like a bug because it makes the previews worthless and makes me think that the training failed to learn anything.
Or am I doing something wrong? Should I not include the embedding name in the prompt?
Is there an existing issue for this?
What happened?
When training a new textual inversion embedding, the generated previews keep using the first version of the embedding instead of the current one as training progresses. This is obvious when selecting "Read parameters (prompt, etc...) from txt2img tab when making previews" and using a fixed seed.
This seems related to the new caching (cached_c, cached_uc) in processing.StableDiffusionProcessingTxt2Img: it caches the conditioning computed from the first version of the embedding and reuses it, so this caching would need to be disabled (or invalidated) during training.
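To illustrate the suspected mechanism, here is a minimal, hypothetical sketch (not the actual webui code): if the conditioning cache is keyed only on the prompt text, a training loop that updates the embedding weights keeps getting the stale cached conditioning back. The names `encode_prompt` and `get_conds` are stand-ins invented for this example.

```python
# Hypothetical sketch of the caching pitfall -- not the actual webui code.
_cond_cache = {}  # prompt text -> cached conditioning

def encode_prompt(prompt, embedding):
    """Stand-in for the text encoder: the result depends on the
    embedding's current weights."""
    return [embedding["weight"] * len(prompt)]

def get_conds(prompt, embedding):
    # Cache key ignores the embedding's current weights -- the bug:
    # once a prompt is cached, later weight updates are never seen.
    if prompt not in _cond_cache:
        _cond_cache[prompt] = encode_prompt(prompt, embedding)
    return _cond_cache[prompt]

embedding = {"weight": 1.0}
first = get_conds("a photo of my-token", embedding)

embedding["weight"] = 2.0          # a training step updates the embedding
second = get_conds("a photo of my-token", embedding)

assert second == first             # stale conditioning: previews never change
```

Clearing the cache before each preview (or including the embedding's version in the cache key) would presumably avoid this during training.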
Steps to reproduce the problem
What should have happened?
Previews should update as the embedding is trained. Instead, the same image is generated over and over, until you stop training and click to refresh the list of embeddings.
Version or Commit where the problem happens
version: v1.4.1
What Python version are you running on ?
Python 3.11.x (above, not supported yet)
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (GTX 16 below)
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
List of extensions
Controlnet, ddetailer, etc. None that should affect training.
Console logs
Additional information
No response