wkpark / sd-webui-model-mixer

Checkpoint model mixer/merger extension
GNU Affero General Public License v3.0

Extracting Lora/Lyco fail #134

Open killerciao opened 3 months ago

killerciao commented 3 months ago

```
loading original SDXL model
building U-Net
no_half = False
loading U-Net...
U-Net: None
building text encoders
loading text encoders...
text encoder 1:
text encoder 2:
create LoRA network. base dim (rank): 64, alpha: 64
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder 1:
create LoRA for Text Encoder 2:
create LoRA for Text Encoder: 264 modules.
create LoRA for U-Net: 722 modules.
create LoRA network. base dim (rank): 64, alpha: 64
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder 1:
create LoRA for Text Encoder 2:
create LoRA for Text Encoder: 264 modules.
create LoRA for U-Net: 722 modules.
Calculate svd:   0%|          | 0/986 [00:00<?, ?it/s]Text encoder is different. 0.0024471282958984375 > 0.0001
ttn_k_proj 264/264 100%: lora_te2_text_model_encoder_layers_31_mlp_fc2
0/722 0%: lora_unet_down_blocks_1_attentions_0_proj_in
Traceback (most recent call last):
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\extensions\sd-webui-model-mixer\scripts\model_mixer.py", line 4730, in extract_lora_from_current_model
    extracted_lora = svd(dict(state_dict_base), dict(state_dict_trained), None, lora_dim, min_diff=min_diff, clamp_quantile=clamp_quantile, device=calc_device,
  File "H:\IA\Packages\Stable Diffusion WebUI Forge\extensions\sd-webui-model-mixer\scripts\kohya\extract_lora_from_models.py", line 186, in svd
    if torch.allclose(module_t.weight, module_o.weight):
RuntimeError: BFloat16 did not match Half
```

I tried every combination of settings, but it always stops at 27%.
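For context: the failing line is `torch.allclose(module_t.weight, module_o.weight)` in `extract_lora_from_models.py`, and the error says one model's weights are `bfloat16` while the other's are `float16` (`Half`), which `torch.allclose` rejects. A minimal sketch of the mismatch and a likely workaround, casting both tensors to a common dtype before comparing (the `weights_match` helper and the tensor names are illustrative, not code from the extension):

```python
import torch

# Hypothetical stand-ins for module_o.weight (base model, bfloat16) and
# module_t.weight (trained model, float16) from the failing comparison.
weight_o = torch.randn(8, 8, dtype=torch.bfloat16)
weight_t = weight_o.to(torch.float16)

# torch.allclose(weight_o, weight_t) raises a dtype-mismatch RuntimeError
# here ("BFloat16 did not match Half"); comparing in a common dtype avoids it.
def weights_match(a: torch.Tensor, b: torch.Tensor, atol: float = 1e-3) -> bool:
    # promote_types(bfloat16, float16) yields float32, which allclose accepts
    common = torch.promote_types(a.dtype, b.dtype)
    return torch.allclose(a.to(common), b.to(common), atol=atol)

print(weights_match(weight_o, weight_t))
```

A patch along these lines (or loading both models at the same precision, e.g. forcing fp16 for the extraction) would let the SVD loop get past this check; whether Forge's model loader exposes such an option is an open question for the maintainers.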