hako-mikan / sd-webui-supermerger

model merge extension for stable diffusion web ui
GNU Affero General Public License v3.0

Extracting Lora from 2 models doesn't work #416

Open CasanovaSan opened 3 weeks ago

CasanovaSan commented 3 weeks ago

So I tried to extract a LoRA from a Pony merge using the LoRA extraction feature, and I got this error:

Calculating sha256 for C:\AI\StableDif\Packages\Stable Diffusion WebUI\models\Stable-diffusion\sd\CassyCartoonV4.4.fp16.safetensors: 1c3578a90aa563a9ee0f0607ab52e19847e2aa03753f30596f08c7530f6c4423
Loading weights [1c3578a90a] from C:\AI\StableDif\Packages\Stable Diffusion WebUI\models\Stable-diffusion\sd\CassyCartoonV4.4.fp16.safetensors
Creating model from config: C:\AI\StableDif\Packages\Stable Diffusion WebUI\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Traceback (most recent call last):
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 292, in makelora
    load_model(checkpoint_info)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 1570, in load_model
    sd_models.load_model(checkpoint_info)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models.py", line 869, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models.py", line 728, in get_empty_cond
    d = sd_model.get_learned_conditioning([""])
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models_xl.py", line 32, in get_learned_conditioning
    c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1557, in _call_impl
    args_result = hook(self, args)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\lowvram.py", line 55, in send_me_to_gpu
    module_in_gpu.to(cpu)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 5 more times]
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
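The bottom of the traceback shows what is going on: the lowvram hook (`send_me_to_gpu` in `modules/lowvram.py`) tries to move a previously loaded module back to CPU, but that module's parameters are still on PyTorch's "meta" device, which stores only shape/dtype metadata and no actual data, so the copy cannot happen. A minimal sketch reproducing the same PyTorch error, outside of the webui:

```python
import torch

# A "meta" tensor records shape and dtype only; it has no backing storage,
# so any attempt to copy its (nonexistent) data to a real device fails.
layer = torch.nn.Linear(4, 4, device="meta")

try:
    # Same call chain as Module._apply(convert) in the traceback above.
    layer.to("cpu")
except NotImplementedError as err:
    print(type(err).__name__, err)
```

This reproduces the error type only; why the SDXL conditioner's weights ended up on the meta device during the extraction (e.g. an interaction between the extension's `load_model` call and the webui's lowvram/medvram handling) is what the issue itself needs to answer.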

These are the settings I used; is there anything I did wrong? (screenshot of settings attached)
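As a quick sanity check while debugging, one can test whether a loaded model is in this half-initialized state by looking for parameters still on the meta device. This `has_meta_params` helper is a hypothetical diagnostic sketch, not part of supermerger or the webui:

```python
import torch

def has_meta_params(module: torch.nn.Module) -> bool:
    """Return True if any parameter lacks real storage (still on 'meta')."""
    return any(p.device.type == "meta" for p in module.parameters())

# A layer created on the meta device has no data that can be moved or copied.
probe = torch.nn.Linear(2, 2, device="meta")
print(has_meta_params(probe))                   # True
print(has_meta_params(torch.nn.Linear(2, 2)))   # False
```

If this returns True for the checkpoint right before the extraction step, the problem is in how the model was loaded, not in the LoRA math itself.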