AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

Create a new embedding gives error #2208

Closed abhijit4569 closed 1 year ago

abhijit4569 commented 1 year ago

Hi,

I tried following the guide to create a new empty embedding, but I get this error in the console. Screenshot and console dump below.

Traceback (most recent call last):
  File "\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 273, in run_predict
    output = await app.blocks.process_api(
  File "\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 742, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 653, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "\stable-diffusion-webui\modules\textual_inversion\ui.py", line 11, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, init_text=initialization_text)
  File "\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 143, in create_embedding
    embedded = embedding_layer.token_embedding.wrapped(ids.to(devices.device)).squeeze(0)
  File "\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
    return F.embedding(
  File "\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2199, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)

Thanks, A.P
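What the RuntimeError above means, in a minimal sketch: PyTorch's embedding lookup requires the index tensor and the weight tensor to live on the same device, and with --lowvram/--medvram active, parts of the model can sit on the CPU while the token ids are sent to cuda:0. The classes and functions below are pure-Python stand-ins for illustration, not webui or torch internals:

```python
# Illustrative stand-ins only -- FakeTensor and embedding_lookup are NOT
# webui or PyTorch code; they just model the same-device rule.
class FakeTensor:
    def __init__(self, device):
        self.device = device

    def to(self, device):
        # like torch.Tensor.to: returns a copy placed on the target device
        return FakeTensor(device)

def embedding_lookup(weight, ids):
    # torch.embedding raises RuntimeError when the two devices disagree
    if weight.device != ids.device:
        raise RuntimeError(
            "Expected all tensors to be on the same device, but found "
            f"at least two devices, {weight.device} and {ids.device}!"
        )
    return "embedded"

weight = FakeTensor("cpu")     # weights left on CPU by --medvram/--lowvram
ids = FakeTensor("cuda:0")     # token ids moved to the GPU

try:
    embedding_lookup(weight, ids)
except RuntimeError as e:
    print(e)                   # reproduces the shape of the reported error

# what the workaround amounts to: keep both tensors on one device
print(embedding_lookup(weight.to("cuda:0"), ids))
```

Removing the VRAM flags keeps the whole model on one device, which is why the workaround below resolves it.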

kPepis commented 1 year ago

From another issue: if you remove the --lowvram or --medvram flags from the startup script, this should be fixed. It worked for me.

abhijit4569 commented 1 year ago

Thanks! Removing the --medvram flag worked.

Master-LBK commented 1 year ago

From another issue: if you remove the --lowvram or --medvram flags from the startup script, this should be fixed. It worked for me.

where can I find the startup script? thx

kPepis commented 1 year ago

where can I find the startup script? thx

If you're on Windows, open the webui-user.bat file at the root of the repo; otherwise, open the webui-user.sh file. Look for a line that reads set COMMANDLINE_ARGS= or export COMMANDLINE_ARGS="". If that line contains the --lowvram or --medvram flags, remove them.
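For reference, the edit looks like this in webui-user.sh (the --xformers flag here is only an illustrative example of another flag you might have; keep any such flags and drop only the VRAM ones):

```shell
# webui-user.sh before the fix (other flags shown are illustrative):
export COMMANDLINE_ARGS="--medvram --xformers"
# after the fix: drop only --medvram / --lowvram, keep the rest
export COMMANDLINE_ARGS="--xformers"
```

The webui-user.bat equivalent is the same edit on the set COMMANDLINE_ARGS= line. Restart the UI afterwards so the new arguments take effect.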

duao0201 commented 1 year ago

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat

This is my startup script, and I don't see any lowvram or medvram flags in it?


zhangge001007 commented 1 year ago

I also couldn't find lowvram or medvram in webui-user.bat. Can anyone explain? Also, how do I open and edit the webui-user.sh file?

Dioskurides commented 4 months ago

I'm having the same problem, but my error is different:

Traceback (most recent call last):
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\ui.py", line 10, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
  File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 259, in create_embedding
    cond_model([""])  # will send cond model to GPU if lowvram/medvram is active
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\StableDiffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
TypeError: list indices must be integers or slices, not str

INFO:httpx:HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
INFO:httpx:HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
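This failure is different from the original issue: the SDXL conditioner indexes the batch with a string key (batch[embedder.input_key]), so it expects a dict-shaped batch, while create_embedding passes a plain list. A minimal sketch of why Python raises this TypeError (the key name "txt" is an illustrative assumption, not necessarily webui's actual input_key):

```python
# The key name "txt" is an illustrative assumption for this sketch.
input_key = "txt"

batch = [""]                  # a plain list, like what create_embedding passes
try:
    batch[input_key]          # indexing a list with a str
except TypeError as e:
    print(e)                  # "list indices must be integers or slices, not str"

batch = {"txt": [""]}         # a dict-shaped batch indexes cleanly by key
assert batch[input_key] == [""]
```

So this one looks like a code-path bug in how the embedding-creation UI calls the SDXL conditioner, not a VRAM-flag misconfiguration.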

wocjhj commented 3 months ago

I'm having the same problem, but my error is different: … TypeError: list indices must be integers or slices, not str

I'm getting the same error.