Closed AlonAshken closed 1 year ago
A lot of other aspects of the webui don't really account for SDXL yet. Set your model to a non-SDXL model and it should work.
Yeah, I tried another model now and it was fine. The bug should stay open with the sdxl label though, right?
I'm leaving it open, but IIRC Auto has suggested that training now be done with https://github.com/kohya-ss/sd-scripts (it supports embedding training as well). I believe he also mentioned at some point removing the training tab entirely, but I could be mistaken.
Alright I'll try that, thanks 😊
For reference, I found the comment Auto left here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/11857#discussioncomment-6480804
Closing as this is a dup of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12165
Kohya does not work for SDXL either; see their bug #1577.
Is there an existing issue for this?
What happened?
When clicking to create an embedding on the Train tab I get the following error:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/content/stable-diffusion-webui/modules/textual_inversion/ui.py", line 11, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
  File "/content/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 283, in create_embedding
    cond_model([""])  # will send cond model to GPU if lowvram/medvram is active
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
TypeError: list indices must be integers or slices, not str
```
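The failing line, `batch[embedder.input_key]`, indexes the batch with a string key, which only works if the batch is a dict; the textual inversion code passes a plain list (`cond_model([""])`), hence the `TypeError`. A minimal sketch of the mismatch (the `"txt"` key and the `sdxl_style_cond` helper are hypothetical, for illustration only, not taken from the repo):

```python
# Minimal reproduction of the TypeError in the traceback above.
# SDXL-style conditioners index the batch by a string key (a dict batch),
# while the webui's embedding-creation code passes a bare list of prompts.

def sdxl_style_cond(batch):
    input_key = "txt"  # hypothetical key; SDXL embedders look up inputs by name
    return batch[input_key]  # raises TypeError when batch is a list

# What create_embedding effectively passes: a plain list.
try:
    sdxl_style_cond([""])
except TypeError as e:
    print(e)  # list indices must be integers or slices, not str

# What an SDXL-style conditioner would need: a dict batch.
print(sdxl_style_cond({"txt": [""]}))  # ['']
```

This is consistent with the comments above: the webui's training tab was written for the SD1.x conditioning interface and doesn't yet account for SDXL's dict-based one.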
This happens both when running on Colab Pro and when running locally. I'm using the most recent version of AUTOMATIC1111 (v1.5.1). Attached is an image showing how to reproduce; this happens no matter which model is chosen, of course.
Steps to reproduce the problem
What should have happened?
A new textual inversion embedding should have been created.
Version or Commit where the problem happens
Version: v1.5.1 Commit hash: 68f336b
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows, Other/Cloud
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above), Other GPUs
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
List of extensions
deforum, dreambooth, controlnet. I also tried without extensions.
Console logs
Additional information
This worked fine in the past.