Closed - jurandfantom closed this issue 1 year ago
No issue with the arg --skip-torch-cuda-test; tested with RTX 2070, Win10, Chrome. Does it work with other models? Does it work if you remove the --skip-torch-cuda-test? Did you test in another browser, e.g. Chrome? This would provide some more insight.
Me too, same error, RTX 2060; the fix from the creator above doesn't work.
According to this, restart your PC: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7522
I was about to say sorry for the late reply, but I see the topic was closed after just 1 day (feels like 3) - don't be so hasty, anapnoe :)
Checked all the suggested things after a PC restart and git pull:
1) Does it work with other models? Nope - same error after changing the model. Additionally, here is the log after switching from one model to another (without generation, just picking from the list):
Received inputs:
["Semi-real\SR_05_ProtoGen_C-V53Photorealism_ATFSilver.safetensors"]
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1297, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1133, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1120, in validate_inputs
raise ValueError(
ValueError: An event handler didn't receive enough input values (needed: 2, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[dropdown, label]
Received inputs:
["Test\edgeOfRealism_edgeOfRealismNOVAE.safetensors"]
2) Does it work if you remove the --skip-torch-cuda-test?
venv "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\Scripts\Python.exe"
Python 3.10.9 | packaged by conda-forge | (main, Jan 11 2023, 15:15:40) [MSC v.1916 64 bit (AMD64)]
Commit hash: c647b27a5005d9ce8bac5d7776ef9374c28890b8
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\launch.py", line 352, in
Press any key to continue . . .
3) Did you test in another browser, e.g. Chrome? Firefox, Chrome, Opera - same situation.
The response is as follows:
venv "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\Scripts\Python.exe" Python 3.10.9 | packaged by conda-forge | (main, Jan 11 2023, 15:15:40) [MSC v.1916 64 bit (AMD64)] Commit hash: c647b27a5005d9ce8bac5d7776ef9374c28890b8 Installing requirements Launching Web UI with arguments: --skip-torch-cuda-test Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled No module 'xformers'. Proceeding without it. Loading weights [0f79788993] from E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\models\Stable-diffusion\Realistic\R_02_EldenRing-v3-pruned.safetensors Creating model from config: E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\configs\v1-inference.yaml LatentDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.52 M params. Applying cross attention optimization (InvokeAI). Textual inversion embeddings loaded(0): Model loaded in 71.2s (load weights from disk: 2.1s, create model: 1.0s, apply weights to model: 67.2s, apply half(): 0.7s). Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True
in launch()
.
Startup time: 114.4s (import torch: 10.8s, import gradio: 7.1s, import ldm: 3.0s, other imports: 6.2s, list SD models: 3.8s, setup codeformer: 0.4s, list builtin upscalers: 0.1s, load scripts: 7.8s, load SD checkpoint: 71.4s, create ui: 3.5s, gradio launch: 0.4s).
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1297, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1133, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1120, in validate_inputs
raise ValueError(
ValueError: An event handler (f) didn't receive enough input values (needed: 176, got: 0).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[checkbox, textbox, textbox, checkbox, checkbox, textbox, checkbox, checkbox, checkbox, slider, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, slider, checkbox, checkbox, number, number, number, checkbox, checkbox, checkbox, checkbox, textbox, checkbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, textbox, checkbox, checkbox, checkbox, textbox, slider, slider, slider, checkboxgroup, dropdown, slider, slider, slider, checkbox, slider, slider, radio, slider, checkbox, checkbox, slider, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, textbox, textbox, number, number, checkbox, checkbox, checkbox, number, label, slider, slider, dropdown, checkbox, slider, slider, checkbox, checkbox, colorpicker, checkbox, checkbox, checkbox, slider, slider, checkbox, number, checkboxgroup, radio, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, slider, slider, slider, number, checkboxgroup, slider, checkbox, checkbox, checkbox, textbox, slider, textbox, checkbox, slider, slider, checkbox, dropdown, dropdown, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, checkbox, textbox, checkbox, checkbox, checkbox, checkbox, checkbox, slider, slider, textbox, textbox, dropdown, textbox, textbox, textbox, radio, textbox, radio, checkbox, checkbox, dropdown, dropdown, checkbox, checkbox, checkbox, slider, radio, radio, number, radio, checkboxgroup, slider, slider, radio, slider, slider, slider, slider, number, checkbox, radio, radio, slider, checkbox, dropdown, dropdown, slider, label, label, label, label]
Received inputs:
[]
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1297, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1133, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1120, in validate_inputs
raise ValueError(
ValueError: An event handler didn't receive enough input values (needed: 1, got: 0).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[dropdown]
Received inputs:
[]
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
output = await app.get_blocks().process_api(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1297, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1133, in preprocess_data
self.validate_inputs(fn_index, inputs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\blocks.py", line 1120, in validate_inputs
raise ValueError(
ValueError: An event handler (f) didn't receive enough input values (needed: 13, got: 0).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[label, dropdown, dropdown, dropdown, radio, slider, checkbox, textbox, radio, radio, dropdown, textbox, checkbox]
Received inputs:
[]
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\applications.py", line 273, in call
await super().call(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\applications.py", line 122, in call
await self.middleware_stack(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in call
await responder(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in call
await self.app(scope, receive, self.send_with_gzip)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 718, in call
await route.handle(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 465, in predict
if app.get_blocks().dependencies[fn_index_inferred]["cancels"]:
IndexError: list index out of range
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\applications.py", line 273, in call
await super().call(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\applications.py", line 122, in call
await self.middleware_stack(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in call
await responder(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in call
await self.app(scope, receive, self.send_with_gzip)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 718, in call
await route.handle(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\gradio\routes.py", line 465, in predict
if app.get_blocks().dependencies[fn_index_inferred]["cancels"]:
IndexError: list index out of range
Error completing request
Arguments: ('task(ij2u17557vqhdgz)', 'green man', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\processing.py", line 658, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps * step_multiplier, cached_uc)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\processing.py", line 597, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
return self.text_model(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 378, in forward
hidden_states = self.layer_norm1(hidden_states)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
return F.layer_norm(
File "E:\Magazyn\Grafika\AI\stable-diffusion-webui-ux\venv\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Issues can be reopened. Can you put --skip-torch-cuda-test --no-half in the command args? It seems that for some reason all params are empty; this is a rather strange issue.
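On the "all params are empty" symptom: Gradio checks that the values posted by the browser match the input components registered for the event handler (validate_inputs in blocks.py above), and it raises exactly this ValueError when the handler's JavaScript side returns fewer values than expected, or nothing at all. A hypothetical minimal sketch, assuming gradio 3.x where event listeners take a `_js` argument (the components below are invented, not taken from the webui code):

```python
import gradio as gr

with gr.Blocks() as demo:
    dropdown = gr.Dropdown(["model A", "model B"], label="model")
    label = gr.Label()
    btn = gr.Button("apply")
    # The _js function must return one value per input component.
    # Returning an empty array means the backend receives [] and raises
    # "An event handler didn't receive enough input values (needed: 2, got: 0)",
    # the same shape of error as the [dropdown, label] one in this thread.
    btn.click(fn=lambda d, l: d, inputs=[dropdown, label], outputs=[label],
              _js="() => []")  # broken on purpose; should return one value per input

demo.launch()
```

A stale cached page whose JavaScript no longer matches the backend could produce the same kind of mismatch (and the IndexError from routes.py above, where the posted fn_index points past the end of the current dependencies list), which would be consistent with the problem disappearing after an update.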
You can also check the latest commit and report if something has changed.
Closing this one, can't reproduce.
Confirmed - all works now. (Sorry, I totally forgot to check things when you asked me.) Great job!
Is there an existing issue for this?
What happened?
Hi there. To be honest, I wish I could provide more information, steps, and anything else, but I have no clue what could even be wrong here. Windows 10, RTX 4090. I just git cloned the repo, installed it, used "--skip-torch-cuda-test" as suggested during installation, and ran the UX as it should (looks great!). The only thing done from there was to pick the Realistic Vision 2 model, enter a prompt, and then just click generate.
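Worth noting: an RTX 4090 should not normally need --skip-torch-cuda-test; needing the flag usually means the venv ended up with a CPU-only torch wheel, and the flag then only hides the problem (everything runs on the CPU, which is also where the 'Half' error above comes from). A quick check, run from inside the webui's venv:

```python
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build; CUDA wheels end with e.g. "+cu118"
print(torch.cuda.is_available())  # should print True on a working RTX 4090 setup
```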
Steps to reproduce the problem
What should have happened?
.
Commit where the problem happens
https://github.com/anapnoe/stable-diffusion-webui-ux/commit/e2d23b46f41dadae72e01630dbe89a79bd2dbc5d
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Mozilla Firefox
Command Line Arguments
List of extensions
No
Console logs
Additional information
No response