sololll opened this issue 9 months ago
same here
There are a few things I'd like you to check:
same, using latest version installed by git clone
RTX4090, CUDA12.2, driver version 535.146.02, 16 vCPU Intel(R) Xeon(R) Gold 6430, 120GB RAM, from AutoDL
Resolved; it was probably caused by ControlNet. Before this I was using stable-diffusion-webui-forge.
Looks like the problem is caused by Forge. Same issue with Forge, but it works fine on A1111 (4070 Ti Super, 16 GB), regardless of whether 'use cuda' is selected or not.
When merging 2 models...
Traceback (most recent call last):
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "W:\SynologyDrive\PonyXL\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 119, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "W:\SynologyDrive\PonyXL\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 451, in smerge
weight_index = BLOCKIDXLL.index(blocks26)
ValueError: 'Not Merge' is not in list
When merging 3 models...
Traceback (most recent call last):
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "W:\SynologyDrive\PonyXL\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "W:\SynologyDrive\PonyXL\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 119, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "W:\SynologyDrive\PonyXL\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 482, in smerge
theta_0_a =lerp(theta_0_a.to(torch.float32),lerp(theta_1[key].to(torch.float32),theta_2[key].to(torch.float32),current_beta/(current_alpha + current_beta)),current_alpha + current_beta).to(theta_0_a.dtype)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
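Not the maintainer's fix, just a hedged sketch of what a device-safe version of that interpolation could look like (torch.lerp is used for illustration; the variable names follow the traceback):

import torch

def lerp_three_on_common_device(theta_0_a, t1, t2, current_alpha, current_beta):
    # Move both source tensors onto theta_0_a's device (in float32) before
    # interpolating, then cast the result back to theta_0_a's original dtype.
    dev = theta_0_a.device
    inner = torch.lerp(
        t1.to(dev, torch.float32),
        t2.to(dev, torch.float32),
        current_beta / (current_alpha + current_beta),
    )
    return torch.lerp(
        theta_0_a.to(torch.float32), inner, current_alpha + current_beta
    ).to(theta_0_a.dtype)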
Does this error occur in the latest version of Forge and this script?
I attempted to merge the 3 models with the latest version, updated about an hour ago, but an error occurred.
Traceback (most recent call last):
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 119, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 518, in smerge
if torch.allclose(theta_1[key].float(), theta_2[key].float(), rtol=0, atol=0):
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
May be fixed.
Thanks. It no longer stops with an error right after starting. However, when the merge progresses to 56%, the process ends with the following error message.
Stage 1/2: 56%|#####################################5 | 1409/2515 [00:18<00:14, 75.87it/s]
Traceback (most recent call last):
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 119, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 452, in smerge
weight_index = BLOCKIDXLL.index(blocks26)
ValueError: 'Not Merge' is not in list
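For the 'Not Merge' ValueError, here is a defensive sketch of the failing lookup (BLOCKIDXLL and blocks26 are the names from the traceback; this is only an illustration, not the actual patch):

def block_weight_index(blocks26, BLOCKIDXLL):
    # "Not Merge" is a sentinel that is not in the block-ID list, so a plain
    # .index() call raises ValueError; guard the lookup instead.
    if blocks26 in BLOCKIDXLL:
        return BLOCKIDXLL.index(blocks26)
    return -1  # hypothetical fallback: treat "Not Merge" as "skip this block"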
Merging 2 models succeeded in version aa82ea54, thanks. But merging 3 models still fails. It would be great if you could fix it.
Traceback (most recent call last):
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 119, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 521, in smerge
traindiff(key,current_alpha,theta_0,theta_1,theta_2)
File "E:\stable-diffusion-webui-forge\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 742, in traindiff
diff_AB = theta_1[key].float() - theta_2[key].float()
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Can I get a rollback? Merging two models works, but merging three models still fails, which is very inconvenient.
Please roll back the three-model-merge version.
If I just add Supermerger after installing the one-click version of Forge and merge 3 SD1.5 models, this error always occurs; I verified it many times. VRAM is 16 GB and I merged 2 GB models. I sometimes used to see this error when Use CUDA was specified, but does it occur even if Use CUDA is not specified?
I'm not at all familiar with Python, but I think I've roughly figured it out. Shouldn't line 1741 of mergers.py be out = torch.empty(qs["shape"],device=="cuda") rather than out = torch.empty(qs["shape"],device="cuda")?
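For reference, a tiny standalone sketch of what the two spellings actually do (hypothetical example, not the code from mergers.py):

import torch

target = "cuda" if torch.cuda.is_available() else "cpu"

# Keyword argument: tells torch.empty which device to allocate the tensor on.
out = torch.empty((4, 4), device=target)

# Equality comparison: device == "cuda" only compares an existing `device`
# variable against the string and yields True/False; it does not select a device.
device = out.device.type
is_cuda = device == "cuda"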
By the way, this is a separate issue, but even after doing this, using ResetClip gives a different error...
I don't know torch's specifics or Python's scoping rules, so I'm starting to feel this change is completely off the mark, but I don't get the impression that the device here is changed only locally, so some cleanup will probably be needed after rewriting it one-sidedly.
In the end I just did what ChatGPT suggested. If inconsistencies show up elsewhere, I'll deal with that when it happens.
def traindiff(key, current_alpha, theta_0, theta_1, theta_2):
    device = theta_1[key].device  # e.g. if theta_1[key] is on CUDA, use that device
    # Move everything to the same device before doing any arithmetic
    theta_1_key = theta_1[key].to(device)
    theta_2_key = theta_2[key].to(device)
    theta_0_key = theta_0[key].to(device)
    # Difference calculation
    diff_AB = theta_1_key.float() - theta_2_key.float()
    distance_A0 = torch.abs(theta_1_key.float() - theta_2_key.float())
    distance_A1 = torch.abs(theta_1_key.float() - theta_0_key.float())
    sum_distances = distance_A0 + distance_A1
    scale = torch.where(sum_distances != 0, distance_A1 / sum_distances, torch.tensor(0., device=device).float())
    sign_scale = torch.sign(theta_1_key.float() - theta_2_key.float())
    scale = sign_scale * torch.abs(scale)
    new_diff = scale * torch.abs(diff_AB)
    theta_0[key] = theta_0_key + (new_diff * (current_alpha * 1.8))
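One caveat, purely an assumption on my part (not checked against the rest of mergers.py): since new_diff is float32, the final addition promotes theta_0[key] to float32 on theta_1[key]'s device, so code that later expects the original dtype/device might need a cast back, e.g.:

# Hypothetical cleanup: capture theta_0[key]'s original placement before the math,
orig_dtype, orig_device = theta_0[key].dtype, theta_0[key].device
# ...then restore it when writing the result back.
theta_0[key] = (theta_0_key + new_diff * (current_alpha * 1.8)).to(orig_device, orig_dtype)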
When 'Options->use cuda' is selected, the following error occurs during the merging process:
Traceback (most recent call last):
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
| 0/1131 [00:00<?, ?it/s]
output = await app.get_blocks().process_api(
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/root/dd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/root/dd/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/mergers/mergers.py", line 108, in smergegen
result,currentmodel,modelid,theta_0,metadata = smerge(
File "/root/dd/stable-diffusion-webui/extensions/sd-webui-supermerger/scripts/mergers/mergers.py", line 449, in smerge
theta_0_a = torch.lerp(theta_0_a.to(torch.float32), theta_1[key].to(torch.float32), current_alpha).to(theta_0_a.dtype)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!