Open grigio opened 5 days ago
Just to check: what happens if you start it again after the initial error? Typically this is due to a Python library mismatch and goes away after a restart, since the SD.Next installer fixes it but cannot reload some of the fixed libraries in the same pass.
I tried again with the models in the folder and the error seems gone, but it doesn't proceed with the generation:
Progress ?it/s 0% 0/7 00:00 ? Base
I don't get any other errors.
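For anyone hitting the same "goes away after restart" behavior: the mismatch the installer detects can be checked by hand by comparing installed package versions against the pins it expects. A minimal sketch, assuming hypothetical pins (the function name and the pin dict are illustrative, not SD.Next's actual requirements):

```python
from importlib import metadata

def check_pins(pins):
    """Compare installed package versions against pinned requirements.

    Returns a list of (name, installed, pinned) tuples for every
    mismatch; installed is None when the package is absent.
    """
    mismatches = []
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != pinned:
            mismatches.append((name, installed, pinned))
    return mismatches

# Example: a package that is not installed shows up as a mismatch.
print(check_pins({"definitely-not-a-real-package-xyz": "1.0"}))
```

If this reports mismatches right after a successful install pass, the already-imported modules in the running process are still the old versions, which is exactly why a restart clears the error.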
$ ./webui.sh --use-rocm --listen
Activate python venv: /mnt/exwin/esperimenti/automatic/venv
Launch: venv/bin/python3
13:25:14-581956 INFO Starting SD.Next
13:25:14-583534 INFO Logger: file="/mnt/exwin/esperimenti/automatic/sdnext.log" level=INFO size=109998 mode=append
13:25:14-584074 INFO Python version=3.11.2 platform=Linux bin="/mnt/exwin/esperimenti/automatic/venv/bin/python3"
venv="/mnt/exwin/esperimenti/automatic/venv"
13:25:14-593767 INFO Version: app=sd.next updated=2024-09-13 hash=e7ec07f9 branch=master url=https://github.com/vladmandic/automatic/tree/master ui=main
13:25:15-083682 INFO Platform: arch=x86_64 cpu= system=Linux release=6.1.0-25-amd64 python=3.11.2
13:25:15-084777 INFO HF cache folder: /home/g/.cache/huggingface/hub
13:25:15-088053 INFO Using CPU-only Torch
13:25:15-116932 INFO Verifying requirements
13:25:15-119661 INFO Verifying packages
13:25:15-122468 INFO Extensions: disabled=[]
13:25:15-123189 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui',
'stable-diffusion-webui-rembg'] extensions-builtin
13:25:15-124396 INFO Extensions: enabled=[] extensions
13:25:15-125078 INFO Startup: quick launch
13:25:15-125700 INFO Extensions: disabled=[]
13:25:15-126270 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui',
'stable-diffusion-webui-rembg'] extensions-builtin
13:25:15-127291 INFO Extensions: enabled=[] extensions
13:25:15-129085 INFO Command line args: ['--use-rocm', '--listen'] listen=True use_rocm=True
13:25:17-559408 INFO Load packages: {'torch': '2.4.1+cu121', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
13:25:17-871562 INFO Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="Scaled-Dot-Product" mode=no_grad
13:25:17-872315 INFO Device:
13:25:18-041933 INFO Available VAEs: path="models/VAE" items=0
13:25:18-042750 INFO Disabled extensions: ['sdnext-modernui']
13:25:18-043718 INFO Available models: path="models/Stable-diffusion" items=13 time=0.00
13:25:18-072905 INFO LoRA networks: available=0 folders=2
13:25:18-219922 INFO Extension: script='extensions-builtin/sd-webui-agent-scheduler/scripts/task_scheduler.py' Using sqlite file:
extensions-builtin/sd-webui-agent-scheduler/task_scheduler.sqlite3
13:25:18-229017 INFO UI theme: type=Standard name="black-teal"
13:25:18-934016 INFO Extension list is empty: refresh required
13:25:19-046393 INFO Local URL: http://localhost:7860/
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvino
13:25:19-458903 WARNING OpenVINO: No compatible GPU detected! Using CPU
13:25:19-495799 INFO [AgentScheduler] Task queue is empty
13:25:19-496410 INFO [AgentScheduler] Registering APIs
13:25:19-548342 INFO Setting Torch parameters: device=cpu dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=None
optimization=Scaled-Dot-Product
13:25:19-549740 INFO Select: model="juggernautXL_version3 [c4b501713f]"
13:25:19-551400 INFO Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline
file="/mnt/exwin/esperimenti/automatic/models/Stable-diffusion/juggernautXL_version3.safetensors" size=6617MB
Diffusers 15.66it/s ████████ 100% 7/7 00:00 00:00 Loading pipeline components...
13:25:20-026078 INFO Load embeddings: loaded=0 skipped=0 time=0.00
13:25:20-306049 INFO Load model: time=0.49 load=0.47 native=1024 {'ram': {'used': 1.77, 'total': 46.27}}
13:25:20-307431 INFO Startup time: 5.17 torch=1.68 gradio=0.40 diffusers=0.06 libraries=0.75 extensions=0.16 ui-networks=0.07 ui-img2img=0.25
ui-control=0.07 ui-models=0.10 ui-settings=0.14 launch=0.05 api=0.29 app-started=0.21 checkpoint=0.76
13:25:46-471424 INFO Browser session: user=None client=192.168.1.78 agent=Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0
13:25:49-636032 INFO MOTD: N/A
13:26:21-547759 INFO Base: class=StableDiffusionXLPipeline
Progress ?it/s 0% 0/7 00:00 ? Base
OK, I finally got an error:
13:36:21-626544 WARNING OpenVINO: No compatible GPU detected! Using CPU
Progress ?it/s 0% 0/10 00:00 ? Base
13:37:20-602800 ERROR Exception: new() received an invalid combination of arguments - got (Tensor, requires_grad=bool), but expected one of:
* (*, torch.device device = None)
didn't match because some of the keywords were incorrect: requires_grad
* (torch.Storage storage)
* (Tensor other)
* (tuple of ints size, *, torch.device device = None)
* (object data, *, torch.device device = None)
13:37:20-604004 ERROR Arguments: args=('task(khw87o1lq23m0bs)', 'a dog', '', [], 10, 0, 28, True, False, False, False, 1, 1, 6, 6, 0.7, 0, 0.5, 1, 1, -1.0,
-1.0, 0, 0, 0, 512, 512, False, 0.3, 1, 1, 'Add with forward', 'None', False, 20, 0, 0, 10, 0, '', '', 0, 0, 0, 0, False, 4, 0.95,
False, 0.6, 1, '#000000', 0, [], 0, 1, 'None', 'None', 'None', 'None', 0.5, 0.5, 0.5, 0.5, None, None, None, None, False, False,
False, False, 0, 0, 0, 0, 1, 1, 1, 1, None, None, None, None, False, '', False, 0, '', [], 0, '', [], 0, '', [], False, True, False,
False, False, False, 0, 'None', [], 'FaceID Base', True, True, 1, 1, 1, 0.5, True, 'person', 1, 0.5, True, 'None', 16, 'None', 1,
True, 'None', 2, True, 1, 0, True, 'none', 3, 4, 0.25, 0.25, 'THUDM/CogVideoX-2b', 'DDIM', 49, 6, 'balanced', True, 'None', 8, True,
1, 0, None, None, 3, 1, 1, 0.8, 8, 64, True, 0.65, True, False, 1, 1, 1, True, 0.5, 600.0, 1.0, True, None, 1, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 0.5, 0.5, 'OpenGVLab/InternVL-14B-224px', False, 0.7, 1.2, 128, False, False, 'positive', 'comma', 0, False, False, '',
'None', '', 1, '', 'None', 1, True, 10, 'None', True, 0, 'None', 2, True, 1, 0, 0, '', [], 0, '', [], 0, '', [], False, True, False,
False, False, False, 0) kwargs={}
13:37:20-606650 ERROR gradio call: TypeError
╭──────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────╮
│ /mnt/exwin/esperimenti/automatic/modules/call_queue.py:31 in f │
│ │
│ 30 │ │ │ try: │
│ ❱ 31 │ │ │ │ res = func(*args, **kwargs) │
│ 32 │ │ │ │ progress.record_results(id_task, res) │
│ │
│ /mnt/exwin/esperimenti/automatic/modules/txt2img.py:93 in txt2img │
│ │
│ 92 │ if processed is None: │
│ ❱ 93 │ │ processed = processing.process_images(p) │
│ 94 │ processed = scripts.scripts_txt2img.after(p, processed, *args) │
│ │
│ /mnt/exwin/esperimenti/automatic/modules/processing.py:191 in process_images │
│ │
│ 190 │ │ │ with context_hypertile_vae(p), context_hypertile_unet(p): │
│ ❱ 191 │ │ │ │ processed = process_images_inner(p) │
│ 192 │
│ │
│ /mnt/exwin/esperimenti/automatic/modules/processing.py:312 in process_images_inner │
│ │
│ 311 │ │ │ │ │ from modules.processing_diffusers import process_diffusers │
│ ❱ 312 │ │ │ │ │ x_samples_ddim = process_diffusers(p) │
│ 313 │ │ │ │ else: │
│ │
│ /mnt/exwin/esperimenti/automatic/modules/processing_diffusers.py:108 in process_diffusers │
│ │
│ 107 │ │ else: │
│ ❱ 108 │ │ │ output = shared.sd_model(**base_args) │
│ 109 │ │ if isinstance(output, dict): │
│ │
│ ... 9 frames hidden ... │
│ │
│ /mnt/exwin/esperimenti/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1553 in _wrapped_call_impl │
│ │
│ 1552 │ │ else: │
│ ❱ 1553 │ │ │ return self._call_impl(*args, **kwargs) │
│ 1554 │
│ │
│ /mnt/exwin/esperimenti/automatic/venv/lib/python3.11/site-packages/torch/nn/modules/module.py:1562 in _call_impl │
│ │
│ 1561 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1562 │ │ │ return forward_call(*args, **kwargs) │
│ 1563 │
│ │
│ /mnt/exwin/esperimenti/automatic/venv/lib/python3.11/site-packages/accelerate/hooks.py:171 in new_forward │
│ │
│ 170 │ │ │ output = module._old_forward(*args, **kwargs) │
│ ❱ 171 │ │ return module._hf_hook.post_forward(module, output) │
│ 172 │
│ │
│ /mnt/exwin/esperimenti/automatic/venv/lib/python3.11/site-packages/accelerate/hooks.py:376 in post_forward │
│ │
│ 375 │ │ │ ): │
│ ❱ 376 │ │ │ │ set_module_tensor_to_device(module, name, "meta") │
│ 377 │ │ │ │ if type(module).__name__ == "Linear8bitLt": │
│ │
│ /mnt/exwin/esperimenti/automatic/venv/lib/python3.11/site-packages/accelerate/utils/modeling.py:440 in set_module_tensor_to_device │
│ │
│ 439 │ │ │ else: │
│ ❱ 440 │ │ │ │ new_value = param_cls(new_value, requires_grad=old_value.requires_grad).to(device) │
│ 441 │
╰───────────────────────────────
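For context on the TypeError above: accelerate's set_module_tensor_to_device rebuilds an offloaded weight with param_cls(new_value, requires_grad=...), which only works when param_cls accepts a requires_grad keyword (as torch.nn.Parameter does); a plain torch.Tensor constructor does not, hence "invalid combination of arguments". The mechanism can be mimicked without torch (all class names here are stand-ins, not the real torch types):

```python
class PlainTensor:
    """Stand-in for torch.Tensor: constructor takes data only."""
    def __init__(self, data):
        self.data = data

class Parameter(PlainTensor):
    """Stand-in for torch.nn.Parameter: also accepts requires_grad."""
    def __init__(self, data, requires_grad=True):
        super().__init__(data)
        self.requires_grad = requires_grad

def rebuild(param_cls, value, requires_grad):
    # Roughly what accelerate does in set_module_tensor_to_device:
    # reconstruct the tensor with the original class, forwarding
    # the old requires_grad flag as a keyword argument.
    return param_cls(value, requires_grad=requires_grad)

rebuild(Parameter, [1.0], False)        # fine
try:
    rebuild(PlainTensor, [1.0], False)  # fails like the log above
except TypeError as exc:
    print("TypeError:", exc)
```

So the crash is a symptom of the pipeline running on a torch/accelerate combination it was not installed for, not a bug in the prompt or model.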
You started with ./webui.sh --use-rocm, but it's trying to use OpenVINO instead. It seems something went wrong with the install/detection of the correct torch build. Start with ./webui.sh --use-rocm --reinstall so the correct torch gets reinstalled.
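As a quick check after reinstalling: the backend a torch wheel was built for is encoded in the local-version suffix of torch.__version__, and the log above shows torch=2.4.1+cu121, i.e. a CUDA build that cannot drive a ROCm GPU. A minimal sketch of reading that suffix (the helper name is hypothetical):

```python
def torch_backend(version):
    """Infer the compute backend from a torch version string,
    e.g. '2.4.1+cu121' -> 'cuda', '2.4.1+rocm6.0' -> 'rocm',
    plain '2.4.1' (no local suffix) -> 'cpu'."""
    _, _, local = version.partition("+")
    if local.startswith("cu"):
        return "cuda"
    if local.startswith("rocm"):
        return "rocm"
    return "cpu"

print(torch_backend("2.4.1+cu121"))  # cuda -- the build from the log
```

After a successful --reinstall on a ROCm system, the same check on the live torch.__version__ should report a rocm suffix instead.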
Issue Description
I tried to run ./webui.sh --use-rocm on Linux.
Version Platform Description
No response
Relevant log output
No response
Backend: Diffusers
UI: Standard
Branch: Master
Model: StableDiffusion 1.5
Acknowledgements