AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Textual Inversion Error #9638

Open Nyc789 opened 1 year ago

Nyc789 commented 1 year ago

Is there an existing issue for this?

What happened?

I keep trying to follow along with Aitrepreneur's tutorial on how to create a Textual Inversion embedding, but every time I get to the preprocess step this error appears.

Steps to reproduce the problem

  1. Go to the Stable Diffusion Train tab, then the Preprocess tab. After loading the 1.5 pruned model (I've tried both the safetensors version and the ckpt), enter the source and destination directories for the images.
  2. Check the BLIP caption checkbox, then press Preprocess.
  3. Wait

What should have happened?

It should have started preprocessing and given me the processed images in the other folder.

Commit where the problem happens

22bcc7be

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

Other than the Python path, I haven't added anything else.

List of extensions

None. This was a brand-new install that I uninstalled and reinstalled, only for the same problem to happen.

Console logs

venv "C:\Users\Jane\JaneSD\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [e1441589a6] from C:\Users\Jane\JaneSD\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned.ckpt
Creating model from config: C:\Users\Jane\JaneSD\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 18.6s (load weights from disk: 5.8s, load config: 0.1s, create model: 1.6s, apply weights to model: 7.5s, apply half(): 0.9s, move model to device: 0.9s, load textual inversion embeddings: 1.7s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 25.4s (import torch: 1.7s, import gradio: 1.0s, import ldm: 0.6s, other imports: 0.9s, load scripts: 1.0s, load SD checkpoint: 19.1s, create ui: 0.8s, gradio launch: 0.2s).
Downloading: "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth" to C:\Users\Jane\JaneSD\stable-diffusion-webui\models\BLIP\model_base_caption_capfilt_large.pth

Error completing request
Arguments: ('task(rw6d8334cme9obz)', 'C:\\Users\\Jane\\Desktop\\birme-512x512', 'C:\\Users\\Jane\\Desktop\\birme-512x512\\Images', 512, 512, 'ignore', False, False, True, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1) {}
Traceback (most recent call last):
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\textual_inversion\ui.py", line 19, in preprocess
    modules.textual_inversion.preprocess.preprocess(*args)
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\textual_inversion\preprocess.py", line 17, in preprocess
    shared.interrogator.load()
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\interrogate.py", line 123, in load
    self.blip_model = self.load_blip_model()
  File "C:\Users\Jane\JaneSD\stable-diffusion-webui\modules\interrogate.py", line 103, in load_blip_model
    blip_model = models.blip.blip_decoder(pretrained=files[0], image_size=blip_image_eval_size, vit='base', med_config=os.path.join(paths.paths["BLIP"], "configs", "med_config.json"))
IndexError: list index out of range
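
The `IndexError` at `interrogate.py` line 103 means the list of discovered BLIP checkpoint files is empty, so `files[0]` has nothing to index; this typically happens when the `model_base_caption_capfilt_large.pth` download shown above was interrupted or never completed. A minimal sketch of the failure mode, assuming a hypothetical helper (`find_blip_checkpoint` is illustrative, not the web UI's actual API):

```python
import os

def find_blip_checkpoint(model_dir, filename="model_base_caption_capfilt_large.pth"):
    # Mirror the lookup: collect matching checkpoint files from models/BLIP.
    path = os.path.join(model_dir, filename)
    files = [path] if os.path.isfile(path) else []
    # Indexing files[0] on an empty list is exactly the IndexError in the
    # traceback; fail with a clearer message instead.
    if not files:
        raise FileNotFoundError(
            f"BLIP checkpoint not found in {model_dir}; "
            "delete any partial download and let the web UI re-fetch it."
        )
    return files[0]
```

A commonly suggested workaround for this error is to delete any zero-byte or partially downloaded `model_base_caption_capfilt_large.pth` under `models\BLIP` and re-run preprocessing so the download restarts from scratch.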

Additional information

I trained the face on the DreamShaper model and inpainted it onto a bunch of other outfits using the same model to get different pictures for the AI, then switched over to 1.5 for the training, if that makes a difference.

shuishu commented 1 year ago

I have the same problem.