AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0
141.54k stars · 26.75k forks

[Bug]: Exception when trying to create an embedding on Train tab #12344

Closed AlonAshken closed 1 year ago

AlonAshken commented 1 year ago

Is there an existing issue for this?

What happened?

When clicking to create an embedding on the Train tab, the call fails with `TypeError: list indices must be integers or slices, not str`, raised from `repositories/generative-models/sgm/modules/encoders/modules.py` at the line `emb_out = embedder(batch[embedder.input_key])`. The full traceback is in the Console logs section below.

This happens both when running on Colab Pro and when running locally. I'm using the most recent version of AUTOMATIC1111 (v1.5.1). The attached image shows how to reproduce; this happens no matter which model is chosen, of course.

Steps to reproduce the problem

  1. Go to the Train tab
  2. Put some word in the keyword field; leave * in the initialization text (or write something, it will still happen)
  3. Click create
  4. Nothing visible happens, but in the cmd window / Colab output you'll see the exception

What should have happened?

A new textual inversion embedding should have been created.

Version or Commit where the problem happens

Version: v1.5.1 Commit hash: 68f336b

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows, Other/Cloud

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above), Other GPUs

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

I tried with and without --xformers.

List of extensions

deforum, dreambooth, controlnet. I also tried without extensions.

Console logs

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/content/stable-diffusion-webui/modules/textual_inversion/ui.py", line 11, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
  File "/content/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 283, in create_embedding
    cond_model([""])  # will send cond model to GPU if lowvram/medvram is active
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
TypeError: list indices must be integers or slices, not str

Additional information

This worked fine in the past.

catboxanon commented 1 year ago

A lot of other aspects of the webui don't really account for SDXL yet. Set your model to a non-SDXL model and it should work.
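For context, the TypeError in the report comes down to a container-type mismatch: the SDXL conditioner in the generative-models repo indexes the batch with a string key (`batch[embedder.input_key]`), so it expects a dict such as `{"txt": [""]}`, while the webui's `create_embedding` passes a bare list `[""]`. A minimal sketch of the mismatch (the `conditioner_forward` function below is a hypothetical stand-in, not the actual sgm code):

```python
def conditioner_forward(batch, input_key="txt"):
    # Stand-in for the sgm conditioner: it indexes the batch by each
    # embedder's input_key, so it needs a dict. A bare list raises the
    # TypeError seen in the traceback.
    return batch[input_key]

# Dict batch, as the SDXL code path expects:
print(conditioner_forward({"txt": [""]}))  # [""]

# List batch, as webui's create_embedding passes:
try:
    conditioner_forward([""])
except TypeError as e:
    print(e)  # list indices must be integers or slices, not str
```

Non-SDXL checkpoints don't go through this conditioner, which is why switching models works around it.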

AlonAshken commented 1 year ago

Yeah, I tried another model now and it was fine. The bug should stay open with the sdxl label though, right?

catboxanon commented 1 year ago

I'm leaving it open but IIRC Auto has suggested training be done with https://github.com/kohya-ss/sd-scripts now (it supports embedding training as well). I believe he also mentioned at some point removing the training tab entirely but I could be mistaken.

AlonAshken commented 1 year ago

Alright I'll try that, thanks 😊

catboxanon commented 1 year ago

For reference, I found the comment Auto left, here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/11857#discussioncomment-6480804

catboxanon commented 1 year ago

Closing as this is a dup of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12165

hughdidit commented 10 months ago

Kohya does not work for SDXL either; see their bug #1577.