seruva19 / kubin

Web-GUI for Kandinsky text-to-image diffusion models.

LoRA load error #143

Closed · Yidhar closed this issue 11 months ago

Yidhar commented 1 year ago
no prior LoRA path declared
applying prior LoRA attention layers from None
Traceback (most recent call last):
  File "D:\kubin\venv\Lib\site-packages\gradio\routes.py", line 442, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1392, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1097, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\utils.py", line 703, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\ui_blocks\t2i.py", line 290, in generate
    return generate_fn(params)
           ^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\web_gui.py", line 43, in <lambda>
    generate_fn=lambda params: kubin.model.t2i(params),
                               ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\models\model_diffusers22\model_22.py", line 161, in t2i
    hooks.call(
  File "D:\kubin\src\hooks\hooks.py", line 34, in call
    hook(hook_type, **hook_info)
  File "D:\kubin\extensions/kd-networks/setup_ext.py", line 61, in on_hook
    bind_networks(
  File "D:\kubin\extensions/kd-networks\nn_tools\nn_attach.py", line 17, in bind_networks
    bind_lora(kubin, model_config, prior, decoder, params, task, networks_info["lora"])
  File "D:\kubin\extensions/kd-networks\nn_tools\nn_attach.py", line 56, in bind_lora
    apply_lora_to_prior(kubin, lora_prior_path, prior)
  File "D:\kubin\extensions/kd-networks\nn_tools\nn_attach.py", line 94, in apply_lora_to_prior
    lora_model = load_model_from_path(lora_prior_path)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\extensions/kd-networks\file_tools.py", line 20, in load_model_from_path
    file_extension = os.path.splitext(path)[1].lstrip(".")
                     ^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen ntpath>", line 232, in splitext
TypeError: expected str, bytes or os.PathLike object, not NoneType

Do I need to enable both the prior LoRA and the decoder LoRA for LoRA to work properly?
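For reference, the TypeError at the bottom of the trace is exactly what os.path.splitext raises when it receives None instead of a path string: the prior LoRA field was left empty, so lora_prior_path was None by the time it reached file_tools.py. A minimal sketch of the failure and a guard (the helper below is illustrative, not kubin's actual file_tools code):

```python
import os

# os.path.splitext(None) raises exactly the error shown above:
#   TypeError: expected str, bytes or os.PathLike object, not NoneType
# A guard like this (hypothetical, not the real file_tools.py) would turn an
# empty prior-LoRA field into a readable error instead of a TypeError:
def load_model_from_path(path):
    if not path:
        raise ValueError("no LoRA path set for this model part")
    file_extension = os.path.splitext(path)[1].lstrip(".")
    return file_extension  # the real helper then loads the file based on its extension
```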

seruva19 commented 1 year ago

Yes, Kandinsky LoRA includes two files, so both paths should be filled.
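The two files correspond to the two halves of Kandinsky 2.2: one set of LoRA weights trained against the prior and one against the decoder UNet. A quick sanity check before pointing kubin at them might look like this (file names are made up, and the format is assumed to be safetensors with a torch.load fallback for .pt/.bin):

```python
import os
import torch

def load_lora_half(path):
    # Assumes safetensors is installed when the file uses that format.
    if path.endswith(".safetensors"):
        from safetensors.torch import load_file
        return load_file(path)
    return torch.load(path, map_location="cpu")

prior_path = "train/lora/my_style_prior.safetensors"      # hypothetical path
decoder_path = "train/lora/my_style_decoder.safetensors"  # hypothetical path

for name, path in (("prior", prior_path), ("decoder", decoder_path)):
    assert os.path.isfile(path), f"{name} LoRA file not found: {path}"
    weights = load_lora_half(path)
    print(name, "->", len(weights), "tensors")
```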

Yidhar commented 1 year ago

> Yes, Kandinsky LoRA includes two files, so both paths should be filled.

Okay, new error. After I load the LoRA and finish one sampling run, the following error occurs when I try to generate an image again:

Yidhar commented 1 year ago
Traceback (most recent call last):
  File "D:\kubin\venv\Lib\site-packages\gradio\routes.py", line 442, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1392, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1097, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\venv\Lib\site-packages\gradio\utils.py", line 703, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\ui_blocks\t2i.py", line 290, in generate
    return generate_fn(params)
           ^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\web_gui.py", line 43, in <lambda>
    generate_fn=lambda params: kubin.model.t2i(params),
                               ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\models\model_diffusers22\model_22.py", line 153, in t2i
    prior, decoder = self.prepareModel(task)
                     ^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\models\model_diffusers22\model_22.py", line 89, in prepareModel
    prior, decoder = prepare_weights_for_task(self, task)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\kubin\src\models\model_diffusers22\model_22_init.py", line 233, in prepare_weights_for_task
    current_decoder.disable_xformers_memory_efficient_attention()
  File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1647, in disable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(False)
  File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1667, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1657, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 227, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 220, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "D:\kubin\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 259, in set_use_memory_efficient_attention_xformers
    processor.load_state_dict(self.processor.state_dict())
  File "D:\kubin\venv\Lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LoRAAttnProcessor2_0:
        Unexpected key(s) in state_dict: "add_k_proj_lora.down.weight", "add_k_proj_lora.up.weight", "add_v_proj_lora.down.weight", "add_v_proj_lora.up.weight".
Yidhar commented 1 year ago

I had to release all the models and reload them in order to generate images again.
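The second trace looks like a processor-swap problem rather than a bad LoRA file: the attached LoRA attention processors carry add_k_proj_lora / add_v_proj_lora weights (the decoder UNet's added image-embedding K/V projections), but when diffusers disables xformers it rebuilds plain LoRAAttnProcessor2_0 modules and copies the old state_dict into them, which fails on those extra keys. A lighter-weight workaround sketch (assumed diffusers API usage, not kubin's actual code; in kubin the decoder pipeline is already held by the model wrapper rather than loaded fresh like this):

```python
import torch
from diffusers import KandinskyV22Pipeline

decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)

# If LoRA attention processors were attached earlier, restore the defaults
# first, so the xformers toggle does not try to copy a LoRA state_dict with
# add_k/v projection keys into a plain LoRAAttnProcessor2_0.
decoder.unet.set_default_attn_processor()
decoder.disable_xformers_memory_efficient_attention()
```

Releasing and reloading the models achieves the same thing, just more slowly, because freshly loaded pipelines start with default attention processors.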