Yidhar closed this issue 1 year ago
Yes, Kandinsky LoRA includes two files, so both paths should be filled.
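For reference, here is a minimal sketch of what filling both paths amounts to in diffusers terms. The pipeline classes are real diffusers APIs, but the file paths are placeholders and this is not kubin's actual loading code; whether `load_attn_procs` is available on the prior model also depends on your diffusers version:

```python
# Minimal sketch, not kubin's actual loader. Paths are placeholders.
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior"
)
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder"
)

# Decoder LoRA: attaches LoRA attention processors to the decoder UNet.
decoder.unet.load_attn_procs("decoder_lora.safetensors")  # placeholder path

# Prior LoRA: the second file targets the prior transformer; support for
# load_attn_procs here is version-dependent, so treat this as illustrative.
prior.prior.load_attn_procs("prior_lora.safetensors")  # placeholder path
```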
Okay, new error. After I load the LoRA and finish sampling once, the following error occurs when an image is generated again:
Traceback (most recent call last):
File "D:\kubin\venv\Lib\site-packages\gradio\routes.py", line 442, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1392, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\gradio\blocks.py", line 1097, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\venv\Lib\site-packages\gradio\utils.py", line 703, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\kubin\src\ui_blocks\t2i.py", line 290, in generate
return generate_fn(params)
^^^^^^^^^^^^^^^^^^^
File "D:\kubin\src\web_gui.py", line 43, in <lambda>
generate_fn=lambda params: kubin.model.t2i(params),
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\src\models\model_diffusers22\model_22.py", line 153, in t2i
prior, decoder = self.prepareModel(task)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\src\models\model_diffusers22\model_22.py", line 89, in prepareModel
prior, decoder = prepare_weights_for_task(self, task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kubin\src\models\model_diffusers22\model_22_init.py", line 233, in prepare_weights_for_task
current_decoder.disable_xformers_memory_efficient_attention()
File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1647, in disable_xformers_memory_efficient_attention
self.set_use_memory_efficient_attention_xformers(False)
File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1667, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "D:\kubin\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1657, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 227, in set_use_memory_efficient_attention_xformers
fn_recursive_set_mem_eff(module)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
fn_recursive_set_mem_eff(child)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\modeling_utils.py", line 220, in fn_recursive_set_mem_eff
module.set_use_memory_efficient_attention_xformers(valid, attention_op)
File "D:\kubin\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 259, in set_use_memory_efficient_attention_xformers
processor.load_state_dict(self.processor.state_dict())
File "D:\kubin\venv\Lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LoRAAttnProcessor2_0:
Unexpected key(s) in state_dict: "add_k_proj_lora.down.weight", "add_k_proj_lora.up.weight", "add_v_proj_lora.down.weight", "add_v_proj_lora.up.weight".
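The traceback points at the cause: when the model is re-prepared, kubin calls `disable_xformers_memory_efficient_attention()`, and diffusers then rebuilds every attention processor and copies the old processor's state dict into the new one (`attention_processor.py`, line 259 above). The Kandinsky decoder UNet uses added-KV attention, so after a LoRA is loaded the old processor carries `add_k_proj_lora`/`add_v_proj_lora` weights that the plain `LoRAAttnProcessor2_0` replacement has no parameters for, hence the unexpected-key error. A possible workaround, sketched below, is to drop the LoRA processors before toggling xformers and re-attach them afterwards; `decoder` and `reapply_lora` are hypothetical names, not kubin's own:

```python
# Sketch of a workaround, not a confirmed fix. `decoder` stands for the
# loaded KandinskyV22Pipeline; `reapply_lora` for however the LoRA was
# originally attached.
def toggle_xformers_safely(decoder, use_xformers: bool):
    # Restore stock attention processors first. This discards the LoRA
    # layers, so diffusers never tries to copy added-KV LoRA weights
    # into a plain LoRAAttnProcessor2_0.
    decoder.unet.set_default_attn_processor()

    if use_xformers:
        decoder.enable_xformers_memory_efficient_attention()
    else:
        decoder.disable_xformers_memory_efficient_attention()

    # Re-attach the LoRA weights on top of the fresh processors.
    # reapply_lora(decoder)  # e.g. decoder.unet.load_attn_procs(path)
```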
I had to release all the models and reload them in order to sample images again.
Do I need to enable both the prior LoRA and the decoder LoRA for LoRA to work properly?