Open MercedesBends opened 1 year ago
Please provide the full console output.
Hello semjon00 - thank you for replying. Where do I find the console output? I can't find any mention of logs, etc.
When you start the WebUI, it should show you the link you should enter/click to get to the interface (with tabs, etc.). This link is a part of console output. Please try to hit generate, get an error, and then copy-paste everything from the console output here.
So this is what I have in front of me - I see nothing like a console or log that I can copy and paste. Thanks for the reply. Much appreciated.
When you start the WebUI, a black box with text appears. I think you will see it once you click here. The text in the box (console) describes some of the inner events happening with the WebUI and the plugin - and is sometimes very insightful.
Apologies - I forgot the cmd window was part of SD. I've booted it and done one depth attempt in SD - which crashed.
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Total 5 (delta 4), reused 5 (delta 4), pack-reused 0
Unpacking objects: 100% (5/5), 873 bytes | 67.00 KiB/s, done.
From https://github.com/AUTOMATIC1111/stable-diffusion-webui
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [cc6cb27103] from C:\Users\auser\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 16.6s (import torch: 4.5s, import gradio: 2.7s, import ldm: 1.0s, other imports: 2.4s, setup codeformer: 0.1s, load scripts: 5.0s, create ui: 0.6s, gradio launch: 0.2s).
preload_extensions_git_metadata for 9 extensions took 0.58s
Creating model from config: C:\Users\auser\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: Doggettx... done.
Textual inversion embeddings loaded(0):
Model loaded in 7.0s (load weights from disk: 4.0s, create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.7s, move model to device: 1.0s).
DepthMap v0.3.12 (394ffa7b)
device: cuda
Loading model weights from ./models/leres/res101.pth
initialize network with normal
loading the model from ./models/pix2pix\latest_net_G.pth
Computing depthmap(s) ..
  0%|                                                                                            | 0/1 [00:00<?, ?it/s]
wholeImage being processed in : 672
Adjust factor is: 1.5503875968992247
Selecting patches ...
Target resolution: (2084, 2067, 3)
Resulting depthmap resolution will be : (381, 378)
patches to process: 7
processing patch 0 / 6 | [ 39  39 334 334]
processing patch 1 / 6 | [ 31 154 227 227]
processing patch 2 / 6 | [ 92 154 227 227]
processing patch 3 / 6 | [158  97 218 218]
processing patch 4 / 6 | [158 158 218 218]
processing patch 5 / 6 | [  0 184 164 164]
processing patch 6 / 6 | [184  61 164 164]
100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:13<00:00, 13.56s/it]
Done.
All done.
Traceback (most recent call last):
  File "C:\Users\auser\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\auser\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1326, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "C:\Users\auser\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1260, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "C:\Users\auser\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 4461, in postprocess
    file = self.pil_to_temp_file(img, dir=self.DEFAULT_TEMP_DIR)
  File "C:\Users\auser\stable-diffusion-webui\modules\ui_tempdir.py", line 55, in save_pil_to_file
    file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir)
  File "C:\Users\auser\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 559, in NamedTemporaryFile
    file = _io.open(dir, mode, buffering=buffering,
  File "C:\Users\auser\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 556, in opener
    fd, name = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
  File "C:\Users\auser\AppData\Local\Programs\Python\Python310\lib\tempfile.py", line 256, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\auser\AppData\Local\Temp\gradio\tmpax1ife8o.png'
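For context, the failure at the bottom of the traceback is not specific to the WebUI: Python's `tempfile.NamedTemporaryFile` does not create the target directory for you, so a missing `dir` produces exactly this `FileNotFoundError`. A minimal sketch reproducing the behavior (the directory name here is made up for illustration):

```python
# Sketch: NamedTemporaryFile raises FileNotFoundError when `dir` does not exist,
# matching the error in the log above. "definitely_missing_dir_xyz" is a
# placeholder path that is assumed not to exist.
import tempfile

try:
    tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir="definitely_missing_dir_xyz")
except FileNotFoundError as e:
    print("reproduced:", e)
```

This suggests the `...\Temp\gradio\` folder was removed at some point (for example by a temp-file cleanup) and the WebUI did not recreate it.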
😢 Thinking...
Haha. That emoji made me laugh. Thank you for the help.
Any recommendations? Reinstall the whole thing? Just a part of it?
@MercedesBends I think this is not specific to the plugin. Please see this: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11040 - The main suggestion there is to manually create this folder: `C:\Users\auser\AppData\Local\Temp\gradio\`.
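You can create the folder in Explorer or with `mkdir %LOCALAPPDATA%\Temp\gradio` in a command prompt. A small Python sketch that does the same, assuming the default Windows per-user temp location (Windows periodically cleans this directory, which may be how the folder disappeared):

```python
# Sketch: ensure Gradio's temp folder exists so NamedTemporaryFile has
# somewhere to write. On Windows, tempfile.gettempdir() typically resolves
# to C:\Users\<you>\AppData\Local\Temp.
import os
import tempfile

gradio_tmp = os.path.join(tempfile.gettempdir(), "gradio")
os.makedirs(gradio_tmp, exist_ok=True)  # no-op if the folder already exists
print("ensured:", gradio_tmp)
```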
Thank you! Much appreciated. I'll give that a go.
Tried different sizes of input images (including the examples at the bottom of the page); nothing seems to work. It calculates first, then I just get an "Error" message. Tried CPU and GPU. Tried different models. I'm out of ideas. Other features of the install work. Any help welcome. Thank you.