Checklist
[X] The issue exists after disabling all extensions
[X] The issue exists on a clean installation of webui
[ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
[X] The issue exists in the current version of the webui
[X] The issue has not been reported before recently
[ ] The issue has been reported before but has not been fixed yet
What happened?
Whenever I attempt to create a new embedding with the textual inversion feature, the process fails with a TypeError: list indices must be integers or slices, not str. The failure occurs when the forward method of the encoder module tries to index the batch in emb_out = embedder(batch[embedder.input_key]).
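The error pattern itself is easy to reproduce in isolation. A minimal sketch (hypothetical names, not webui code) that triggers the same TypeError:

```python
# Hypothetical minimal reproduction of the error pattern from the log:
# indexing a plain list with a string key raises the same TypeError.
batch = [""]        # what the caller passes: a list of prompts
input_key = "txt"   # the conditioner indexes the batch with a str key

try:
    batch[input_key]  # list indexed with a str -> TypeError
except TypeError as e:
    print(e)  # list indices must be integers or slices, not str
```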
Steps to reproduce the problem
Navigate to the Training tab.
Select "Create embedding".
Proceed to create a new embedding.
What should have happened?
The .pt file for the new embedding should have been created without raising a TypeError, and it should pass the same internal checks that existing embeddings do.
Console logs
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Loading weights [31e35c80fc] from /content/temp_models/sd_xl_base_1.0.safetensors
Running on local URL: https://virtually-preston-pressing-fraction.trycloudflare.com
✔ Connected
Startup time: 14.9s (import torch: 7.5s, import gradio: 0.9s, setup paths: 1.9s, initialize shared: 0.2s, other imports: 1.1s, load scripts: 1.3s, initialize extra networks: 0.2s, create ui: 1.1s, gradio launch: 0.4s, add APIs: 0.4s).
Creating model from config: /content/gdrive/MyDrive/sd/stablediffusion/generative-models/configs/inference/sd_xl_base.yaml
Applying attention optimization: sdp... done.
Model loaded in 8.1s (load weights from disk: 3.4s, create model: 0.9s, apply weights to model: 2.8s, move model to device: 0.1s, load textual inversion embeddings: 0.4s, calculate empty prompt: 0.4s).
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1435, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1107, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/ui.py", line 10, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 259, in create_embedding
    cond_model([""])  # will send cond model to GPU if lowvram/medvram is active
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
TypeError: list indices must be integers or slices, not str
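For context, a hedged sketch of the apparent mismatch, with names simplified from sgm's conditioner and a hypothetical DummyEmbedder (this is not the actual webui code or a proposed fix): the conditioner's forward expects the batch to be a dict keyed by each embedder's input_key, while create_embedding passes a bare list ([""]).

```python
# Simplified, hypothetical model of the sgm conditioner's forward loop.
def forward(embedders, batch):
    outs = []
    for embedder in embedders:
        # Requires batch to be a dict keyed by input_key ("txt", etc.);
        # a bare list here raises the TypeError from the log above.
        outs.append(embedder.embed(batch[embedder.input_key]))
    return outs

class DummyEmbedder:
    input_key = "txt"
    def embed(self, prompts):
        return [f"emb:{p}" for p in prompts]

# A dict-shaped batch works; a bare list like [""] does not.
print(forward([DummyEmbedder()], {"txt": [""]}))  # ok
```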
What browsers do you use to access the UI ?
Google Chrome, Android
Sysinfo
sysinfo-2024-03-16-08-30.json
Additional information
Running on Google Colab