Closed: abhijit4569 closed this issue 1 year ago
From another issue: if you remove the --lowvram or --medvram flags from the startup script, this should be fixed. It worked for me.
Thanks! Removing the --medvram flag worked.
where can I find the startup script? thx
If you're on Windows, open the webui-user.bat file at the root of the repo; otherwise, open the webui-user.sh file. Look for a line that reads set COMMANDLINE_ARGS= or export COMMANDLINE_ARGS="". If that line has the --lowvram or --medvram flags, remove them.
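To make "remove them" concrete, here is a small illustrative Python sketch (my own, not part of the webui code; the helper name `strip_vram_flags` and the example flags are just for demonstration) of what the edited COMMANDLINE_ARGS value should end up looking like:

```python
def strip_vram_flags(args_line: str) -> str:
    """Drop the --lowvram/--medvram flags from a COMMANDLINE_ARGS value,
    keeping every other flag untouched."""
    kept = [tok for tok in args_line.split() if tok not in ("--lowvram", "--medvram")]
    return " ".join(kept)

print(strip_vram_flags("--medvram --xformers"))  # -> --xformers
print(strip_vram_flags("--lowvram"))             # -> (empty: the whole value goes away)
```

In practice you would just edit the line by hand in webui-user.bat or webui-user.sh; the sketch only shows which tokens to delete.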
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
call webui.bat
This is my startup script; I don't see any lowvram or medvram flags?
I also couldn't find lowvram or medvram in webui-user.bat. Can someone explain? Also, how do I open and edit the webui-user.sh file?
I'm having the same problem, but my error is different:
Traceback (most recent call last):
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\ui.py", line 10, in create_embedding
    filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, overwrite_old, init_text=initialization_text)
  File "C:\StableDiffusion\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 259, in create_embedding
    cond_model([""])  # will send cond model to GPU if lowvram/medvram is active
  File "C:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\StableDiffusion\stable-diffusion-webui\repositories\generative-models\sgm\modules\encoders\modules.py", line 141, in forward
    emb_out = embedder(batch[embedder.input_key])
TypeError: list indices must be integers or slices, not str
INFO:httpx:HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
INFO:httpx:HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
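For context, here is a minimal illustration (my own sketch, not webui code) of why that final frame fails: the SGM embedder indexes `batch` with a string key (`embedder.input_key`), which only works when `batch` is a dict, while `create_embedding` passes a plain list `[""]`. The key name `"txt"` below is an assumption for demonstration:

```python
# Reproducing the TypeError from the traceback above (illustrative only).
batch = [""]        # what create_embedding passes to the cond model
input_key = "txt"   # a stand-in for embedder.input_key (assumed name)

try:
    batch[input_key]  # indexing a list with a str, as in embedder forward()
except TypeError as e:
    print(e)  # -> list indices must be integers or slices, not str
```

In other words, the embedder expects a dict-shaped batch like `{"txt": [""]}`, so the error comes from the shape of the input rather than from VRAM flags.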
I get the same error.
Hi,
I tried following the guide to create a new empty embedding, but I get this error in the console. Screenshot and console dump below.

![image](https://user-images.githubusercontent.com/6291230/194952575-e9cf137e-be26-4ecf-b9fb-60eaebdb630e.png)
Thanks, A.P