anapnoe / stable-diffusion-webui-ux

Stable Diffusion web UI UX

[Bug]: NameError: name 'short_commit' is not defined #139

Closed. evilalmus closed this issue 1 year ago.

evilalmus commented 1 year ago

Is there an existing issue for this?

What happened?

Getting this error on an install attempt (Windows):

Traceback (most recent call last):
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\launch.py", line 370, in <module>
    start()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\launch.py", line 365, in start
    webui.webui()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\webui.py", line 298, in webui
    shared.demo = modules.ui.create_ui()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\modules\ui.py", line 1805, in create_ui
    footer = footer.format(versions=versions_html())
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\modules\ui.py", line 2078, in versions_html
    <li><span>commit: <a href="https://github.com/anapnoe/stable-diffusion-webui-ux/commit/{commit}"></span>{short_commit}</a></li>
NameError: name 'short_commit' is not defined
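For context, this is the standard Python failure mode of an f-string (or format call) referencing a name that was never assigned. The tiny, self-contained example below reproduces the same NameError outside the project; only the hash value is taken from this report, nothing is taken from the actual modules/ui.py code.

# Self-contained illustration of the failure pattern seen in the traceback:
# the f-string refers to `short_commit`, which is never assigned, so Python
# raises NameError while the string is being built.
commit = "a1a0b7416004e2f1d24a47ad9318405452e3383c"

try:
    html = f'<li><span>commit: <a href=".../commit/{commit}"></span>{short_commit}</a></li>'  # noqa: F821
except NameError as err:
    print(err)  # -> name 'short_commit' is not defined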

Steps to reproduce the problem

Follow the install instructions:

  1. Install Python 3.10.6 (newer versions of Python do not support torch), checking "Add Python to PATH".
  2. Install git.
  3. Download the stable-diffusion-webui-ux repository, for example by running git clone https://github.com/anapnoe/stable-diffusion-webui-ux.git.
  4. Run webui-user.bat from Windows Explorer as normal, non-administrator, user.

The error occurs after "Model loaded in 6.3s (load weights from disk: 3.4s, create model: 0.2s, apply weights to model: 0.8s, move model to device: 1.8s)."

What should have happened?

The WebUI should finish loading.

Commit where the problem happens

Commit hash: a1a0b7416004e2f1d24a47ad9318405452e3383c

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--xformers 
--no-half   
--no-half-vae 
--precision full 
--api 
--listen 
--ckpt-dir F:\\AI_Image_Gen\\models\\Stable-diffusion\\ 
--vae-dir F:\\AI_Image_Gen\\models\\VAE\\ 
--codeformer-models-path F:\\AI_Image_Gen\\models\\Codeformer\\ 
--gfpgan-models-path F:\\AI_Image_Gen\\models\\GFPGAN\\ 
--esrgan-models-path F:\\AI_Image_Gen\\models\\ESRGAN\\ 
--bsrgan-models-path F:\\AI_Image_Gen\\models\\BSRGAN\\ 
--realesrgan-models-path F:\\AI_Image_Gen\\models\\RealESRGAN\\ 
--scunet-models-path F:\\AI_Image_Gen\\models\\ScuNET\\ 
--swinir-models-path F:\\AI_Image_Gen\\models\\SwinIR\\ 
--ldsr-models-path F:\\AI_Image_Gen\\models\\LDSR\\ 
--lora-dir F:\\AI_Image_Gen\\models\\Lora\\ 
--hypernetwork-dir F:\\AI_Image_Gen\\models\\hypernetworks\\

List of extensions

None

Console logs

F:\AI_Image_Gen\stable-diffusion-webui-ux>webui-user.bat
Creating venv in directory F:\AI_Image_Gen\stable-diffusion-webui-ux\venv using python "C:\Users\evila\AppData\Local\Programs\Python\Python310\python.exe"
venv "F:\AI_Image_Gen\stable-diffusion-webui-ux\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: <none>
Commit hash: a1a0b7416004e2f1d24a47ad9318405452e3383c
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Collecting torch==2.0.1
  Downloading https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-win_amd64.whl (2619.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.6/2.6 GB 1.9 MB/s eta 0:00:00
Collecting torchvision==0.15.2
  Downloading https://download.pytorch.org/whl/cu118/torchvision-0.15.2%2Bcu118-cp310-cp310-win_amd64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 105.0 MB/s eta 0:00:00
Collecting typing-extensions
  Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting sympy
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting filelock
  Using cached filelock-3.12.0-py3-none-any.whl (10 kB)
Collecting networkx
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting numpy
  Using cached numpy-1.24.3-cp310-cp310-win_amd64.whl (14.8 MB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-9.5.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting requests
  Using cached requests-2.30.0-py3-none-any.whl (62 kB)
Collecting MarkupSafe>=2.0
  Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.0.2-py3-none-any.whl (123 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl (97 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2023.5.7-py3-none-any.whl (156 kB)
Collecting idna<4,>=2.5
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.2 certifi-2023.5.7 charset-normalizer-3.1.0 filelock-3.12.0 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.24.3 pillow-9.5.0 requests-2.30.0 sympy-1.12 torch-2.0.1+cu118 torchvision-0.15.2+cu118 typing-extensions-4.5.0 urllib3-2.0.2

[notice] A new release of pip available: 22.2.1 -> 23.1.2
[notice] To update, run: F:\AI_Image_Gen\stable-diffusion-webui-ux\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Installing xformers
Collecting xformers==0.0.17
  Using cached xformers-0.0.17-cp310-cp310-win_amd64.whl (112.6 MB)
Installing collected packages: xformers
Successfully installed xformers-0.0.17

[notice] A new release of pip available: 22.2.1 -> 23.1.2
[notice] To update, run: F:\AI_Image_Gen\stable-diffusion-webui-ux\venv\Scripts\python.exe -m pip install --upgrade pip
Cloning Stable Diffusion into F:\AI_Image_Gen\stable-diffusion-webui-ux\repositories\stable-diffusion-stability-ai...
Cloning Taming Transformers into F:\AI_Image_Gen\stable-diffusion-webui-ux\repositories\taming-transformers...
Cloning K-diffusion into F:\AI_Image_Gen\stable-diffusion-webui-ux\repositories\k-diffusion...
Cloning CodeFormer into F:\AI_Image_Gen\stable-diffusion-webui-ux\repositories\CodeFormer...
Cloning BLIP into F:\AI_Image_Gen\stable-diffusion-webui-ux\repositories\BLIP...
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments: --xformers --no-half --no-half-vae --precision full --api --listen --ckpt-dir F:\AI_Image_Gen\models\Stable-diffusion\ --vae-dir F:\AI_Image_Gen\models\VAE\ --codeformer-models-path F:\AI_Image_Gen\models\Codeformer\ --gfpgan-models-path F:\AI_Image_Gen\models\GFPGAN\ --esrgan-models-path F:\AI_Image_Gen\models\ESRGAN\ --bsrgan-models-path F:\AI_Image_Gen\models\BSRGAN\ --realesrgan-models-path F:\AI_Image_Gen\models\RealESRGAN\ --scunet-models-path F:\AI_Image_Gen\models\ScuNET\ --swinir-models-path F:\AI_Image_Gen\models\SwinIR\ --ldsr-models-path F:\AI_Image_Gen\models\LDSR\ --lora-dir F:\AI_Image_Gen\models\Lora\ --hypernetwork-dir F:\AI_Image_Gen\models\hypernetworks\
Calculating sha256 for F:\AI_Image_Gen\models\Stable-diffusion\512-base-ema.ckpt: d635794c1fedfdfa261e065370bea59c651fc9bfa65dc6d67ad29e11869a1824
Loading weights [d635794c1f] from F:\AI_Image_Gen\models\Stable-diffusion\512-base-ema.ckpt
Creating model from config: F:\AI_Image_Gen\models\Stable-diffusion\512-base-ema.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(4): nartfixer, nfixer, nrealfixer, style-bridal_sd2
Textual inversion embeddings skipped(36): 3d-female-cyborgs, 3nid_14, 3nid_15, aleyna-tilki_o, anime-girl_s, arcane-style-jv_s, bad-hands-5, bad_prompt_version2, caitlin-fairchild, cherrynobodysd15, cinemagic_still, cornell-box_o, cyberpunk-lucy, dollienobodysd15, dr-strange, EasyNegative, eonn, evelynnobodysd15, follynobodysd15, hd-emoji, JamieCole, jflw, kawaii-girl-plus-object, koh_amberheard, kr1st3nst-1000, maranobodysd15, nbdy-julia-sd15, ng_deepnegative_v1_75t, nixeu_s, sakimi-style, SarahLance, sewerslvt, spider-gwen, spider-gwen_o, style-bridal, VeronicaMars
Model loaded in 17.9s (calculate hash: 10.3s, load weights from disk: 1.9s, create model: 0.3s, apply weights to model: 1.0s, move model to device: 1.5s, load textual inversion embeddings: 2.7s).
Loading weights [d635794c1f] from F:\AI_Image_Gen\models\Stable-diffusion\512-base-ema.ckpt
Creating model from config: F:\AI_Image_Gen\models\Stable-diffusion\512-base-ema.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 865.91 M params.
Applying xformers cross attention optimization.
Model loaded in 6.3s (load weights from disk: 3.4s, create model: 0.2s, apply weights to model: 0.8s, move model to device: 1.8s).
Traceback (most recent call last):
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\launch.py", line 370, in <module>
    start()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\launch.py", line 365, in start
    webui.webui()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\webui.py", line 298, in webui
    shared.demo = modules.ui.create_ui()
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\modules\ui.py", line 1805, in create_ui
    footer = footer.format(versions=versions_html())
  File "F:\AI_Image_Gen\stable-diffusion-webui-ux\modules\ui.py", line 2078, in versions_html
    <li><span>commit: <a href="https://github.com/anapnoe/stable-diffusion-webui-ux/commit/{commit}"></span>{short_commit}</a></li>
NameError: name 'short_commit' is not defined
Press any key to continue . . .

Additional information

No response

evilalmus commented 1 year ago

Tried again with no Command Line Arguments and had the same result.

fu4dh4s4n commented 1 year ago

It was fixed with the last commit.
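Presumably the fix amounts to defining the short hash before the footer string is formatted. A hypothetical sketch of that shape is below; the function name and parameter follow the traceback, but the actual patched code in modules/ui.py may differ.

def versions_html(commit: str) -> str:
    # Hypothetical sketch only: derive the short form of the hash before it is
    # interpolated, so `short_commit` exists when the footer HTML is built.
    short_commit = commit[:8] if commit != "<none>" else "<none>"
    return (
        '<li><span>commit: '
        f'<a href="https://github.com/anapnoe/stable-diffusion-webui-ux/commit/{commit}">'
        f'{short_commit}</a></span></li>'
    )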

evilalmus commented 1 year ago

Confirmed. Thank you!