AbdBarho / stable-diffusion-webui-docker

Easy Docker setup for Stable Diffusion with user-friendly UI

Training and using a hypernetwork broke the WebUI container #318

Closed: Pyroglyph closed this issue 1 year ago

Pyroglyph commented 1 year ago

Has this issue been opened before?

Describe the bug

I trained a hypernetwork and used it once to generate a single image. On the second generation it took a long time and eventually crashed the container. Now when I try to launch the container (command: docker compose --profile auto up --build --remove-orphans) I get this:

webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted LDSR
webui-docker-auto-1  | Mounted BLIP
webui-docker-auto-1  | Mounted Hypernetworks
webui-docker-auto-1  | Mounted VAE
webui-docker-auto-1  | Mounted GFPGAN
webui-docker-auto-1  | Mounted RealESRGAN
webui-docker-auto-1  | Mounted Deepdanbooru
webui-docker-auto-1  | Mounted ScuNET
webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted StableDiffusion
webui-docker-auto-1  | Mounted embeddings
webui-docker-auto-1  | Mounted ESRGAN
webui-docker-auto-1  | Mounted config.json
webui-docker-auto-1  | Mounted SwinIR
webui-docker-auto-1  | Mounted Lora
webui-docker-auto-1  | Mounted MiDaS
webui-docker-auto-1  | Mounted ui-config.json
webui-docker-auto-1  | Mounted BSRGAN
webui-docker-auto-1  | Mounted Codeformer
webui-docker-auto-1  | Mounted extensions
webui-docker-auto-1  | ++ nproc
webui-docker-auto-1  | + accelerate launch --num_cpu_threads_per_process=16 webui.py --listen --port 7860 --allow-code --medvram --xformers --enable-insecure-extension-access --api
webui-docker-auto-1  | 2023-02-01 23:27:47.646841: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
webui-docker-auto-1  | To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
webui-docker-auto-1  | 2023-02-01 23:27:48.287397: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
webui-docker-auto-1  | 2023-02-01 23:27:48.287482: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
webui-docker-auto-1  | 2023-02-01 23:27:48.287489: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
webui-docker-auto-1  | The following values were not passed to `accelerate launch` and had defaults used instead:
webui-docker-auto-1  |  `--num_processes` was set to a value of `1`
webui-docker-auto-1  |  `--num_machines` was set to a value of `1`
webui-docker-auto-1  |  `--mixed_precision` was set to a value of `'no'`
webui-docker-auto-1  |  `--dynamo_backend` was set to a value of `'no'`
webui-docker-auto-1  | To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
webui-docker-auto-1  | Traceback (most recent call last):
webui-docker-auto-1  |   File "/stable-diffusion-webui/webui.py", line 15, in <module>
webui-docker-auto-1  |     from modules import import_hook, errors, extra_networks, ui_extra_networks_checkpoints
webui-docker-auto-1  |   File "/stable-diffusion-webui/modules/ui_extra_networks_checkpoints.py", line 6, in <module>
webui-docker-auto-1  |     from modules import shared, ui_extra_networks, sd_models
webui-docker-auto-1  |   File "/stable-diffusion-webui/modules/shared.py", line 12, in <module>
webui-docker-auto-1  |     import modules.interrogate
webui-docker-auto-1  |   File "/stable-diffusion-webui/modules/interrogate.py", line 15, in <module>
webui-docker-auto-1  |     from modules import devices, paths, shared, lowvram, modelloader, errors
webui-docker-auto-1  |   File "/stable-diffusion-webui/modules/modelloader.py", line 7, in <module>
webui-docker-auto-1  |     from basicsr.utils.download_util import load_file_from_url
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/basicsr/__init__.py", line 3, in <module>
webui-docker-auto-1  |     from .archs import *
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/basicsr/archs/__init__.py", line 5, in <module>
webui-docker-auto-1  |     from basicsr.utils import get_root_logger, scandir
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/basicsr/utils/__init__.py", line 4, in <module>
webui-docker-auto-1  |     from .img_process_util import USMSharp, usm_sharp
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/basicsr/utils/img_process_util.py", line 1, in <module>
webui-docker-auto-1  |     import cv2
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/cv2/__init__.py", line 181, in <module>
webui-docker-auto-1  |     bootstrap()
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
webui-docker-auto-1  |     native_module = importlib.import_module("cv2")
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
webui-docker-auto-1  |     return _bootstrap._gcd_import(name[level:], package, level)
webui-docker-auto-1  | ImportError: libGL.so.1: cannot open shared object file: No such file or directory
webui-docker-auto-1  | Traceback (most recent call last):
webui-docker-auto-1  |   File "/usr/local/bin/accelerate", line 8, in <module>
webui-docker-auto-1  |     sys.exit(main())
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
webui-docker-auto-1  |     args.func(args)
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1104, in launch_command
webui-docker-auto-1  |     simple_launcher(args)
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 567, in simple_launcher
webui-docker-auto-1  |     raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
webui-docker-auto-1  | subprocess.CalledProcessError: Command '['/usr/local/bin/python', 'webui.py', '--listen', '--port', '7860', '--allow-code', '--medvram', '--xformers', '--enable-insecure-extension-access', '--api']' returned non-zero exit status 1.
webui-docker-auto-1 exited with code 1

and then the container closes.

I have generated thousands of images without issue prior to attempting to use hypernetworks, so I don't think my hardware or setup is the problem.

I've tried re-running the download container in case that would fix things, but it didn't help.
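
In case it helps with diagnosis: the traceback above seems to point at cv2 failing to load libGL.so.1 rather than at the hypernetwork itself. A quick check like the following should show which OpenCV build is installed and whether libGL is present in the image at all. This is only a sketch; the service name auto and the availability of bash inside the image are assumptions on my part.

# Sketch: run the image without its normal entrypoint and inspect the
# installed OpenCV packages and the available libGL libraries.
docker compose --profile auto run --rm --entrypoint bash auto \
  -c "pip list 2>/dev/null | grep -i opencv; ldconfig -p | grep -i libgl || echo 'no libGL in the image'"

If the non-headless opencv-python package shows up there, that would explain the ImportError, since that build expects libGL.so.1 to exist on the system.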

Which UI

auto

Hardware / Software

AbdBarho commented 1 year ago

This is probably one of your extensions; there should be no TensorFlow in the container whatsoever.

You can try adding this to your startup.sh:

pip install --upgrade --force-reinstall opencv-python-headless
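
For reference, a minimal sketch of such a startup.sh follows. The exact hook point is an assumption here and depends on how your container is set up; the idea is simply to force the headless OpenCV build, which does not link against libGL.so.1, before the WebUI launches.

#!/bin/bash
# Sketch of a startup.sh hook (location/mechanism assumed, not confirmed):
# replace any full opencv-python install with the headless build so that
# importing cv2 no longer requires libGL.so.1.
set -e

pip install --upgrade --force-reinstall opencv-python-headless

The headless wheel ships without the GUI bindings, which is why it drops the libGL dependency that is causing the ImportError in your log.
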
github-actions[bot] commented 1 year ago

This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 1 year ago

This issue was closed because it has been stalled for 7 days with no activity.