snap-research / weights2weights

Official Implementation of weights2weights

error loading lora in diffusers #3

Open loboere opened 3 months ago

loboere commented 3 months ago

I tried to load the LoRA with pipe.load_lora_weights("/content/adapter_model.safetensors"), but it raises an error.

diffusers==0.28.0
model = 'stablediffusionapi/realistic-vision-v51'


```
ValueError                                Traceback (most recent call last)
<ipython-input> in <cell line: 1>()
----> 1 pipe.load_lora_weights("/content/adapter_model.safetensors")

5 frames
/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora.py in load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)
    122         low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT)
    123
--> 124         self.load_lora_into_unet(
    125             state_dict,
    126             network_alphas=network_alphas,

/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora.py in load_lora_into_unet(cls, state_dict, network_alphas, unet, low_cpu_mem_usage, adapter_name, _pipeline)
    477         is_model_cpu_offload, is_sequential_cpu_offload = cls._optionally_disable_offloading(_pipeline)
    478
--> 479         inject_adapter_in_model(lora_config, unet, adapter_name=adapter_name)
    480         incompatible_keys = set_peft_model_state_dict(unet, state_dict, adapter_name)
    481

/usr/local/lib/python3.10/dist-packages/peft/mapping.py in inject_adapter_in_model(peft_config, model, adapter_name)
    213
    214     # By instantiating a peft model we are injecting randomly initialized LoRA layers into the model's modules.
--> 215     peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name)
    216
    217     return peft_model.model

/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in __init__(self, model, config, adapter_name)
    137
    138     def __init__(self, model, config, adapter_name) -> None:
--> 139         super().__init__(model, config, adapter_name)
    140
    141     def _check_new_adapter_config(self, config: LoraConfig) -> None:

/usr/local/lib/python3.10/dist-packages/peft/tuners/tuners_utils.py in __init__(self, model, peft_config, adapter_name)
    173             self._pre_injection_hook(self.model, self.peft_config[adapter_name], adapter_name)
    174         if peft_config != PeftType.XLORA or peft_config[adapter_name] != PeftType.XLORA:
--> 175             self.inject_adapter(self.model, adapter_name)
    176
    177         # Copy the peft_config in the injected model.

/usr/local/lib/python3.10/dist-packages/peft/tuners/tuners_utils.py in inject_adapter(self, model, adapter_name, autocast_adapter_dtype)
    433         # Handle X-LoRA case.
    434         if not is_target_modules_in_base_model and hasattr(peft_config, "target_modules"):
--> 435             raise ValueError(
    436                 f"Target modules {peft_config.target_modules} not found in the base model. "
    437                 f"Please check the target modules and try again."

ValueError: Target modules {'base_model.model.up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_q', 'base_model.model.down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_v', 'base_model.model.up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_q', 'base_model.model.up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_v', 'base_model.model.down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q', 'base_model.model.up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q', 'base_model.model.up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_v', 'base_model.model.up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_v', 'base_model.model.down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q', 'base_model.model.down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_v', 'base_model.model.down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v', 'base_model.model.up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v', 'base_model.model.up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v', 'base_model.model.down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_v', 'base_model.model.up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_q', 'base_model.model.up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q', 'base_model.model.down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v', 'base_model.model.up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q', 'base_model.model.up_blocks.3.attentions.2.transformer_blo...
```

avdravid commented 3 months ago

Hi. To load the LoRA into diffusers, check the notebook in other/loading.ipynb, specifically cell 7.
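Note that every module name in the error carries a `base_model.model.` prefix, which suggests the checkpoint was saved in PEFT's key format rather than the `unet.`-prefixed layout that `pipe.load_lora_weights` looks for. As a rough illustration of the mismatch (not the repo's official loading path, and the exact key convention depends on your diffusers version; `remap_peft_to_diffusers` is a hypothetical helper), the keys could be renamed before loading:

```python
def remap_peft_to_diffusers(state_dict):
    """Rename PEFT-style LoRA keys (base_model.model. ...) to a
    unet.-prefixed form.

    Hypothetical helper for illustration only: the exact key layout
    diffusers expects varies between versions, so verify against the
    keys your installed version produces.
    """
    remapped = {}
    peft_prefix = "base_model.model."
    for key, value in state_dict.items():
        new_key = key
        # Drop the PEFT wrapper prefix if present.
        if new_key.startswith(peft_prefix):
            new_key = new_key[len(peft_prefix):]
        # Mark the weights as belonging to the UNet.
        if not new_key.startswith("unet."):
            new_key = "unet." + new_key
        remapped[new_key] = value
    return remapped

# Usage sketch (requires safetensors and a diffusers pipeline):
# from safetensors.torch import load_file
# sd = load_file("/content/adapter_model.safetensors")
# pipe.load_lora_weights(remap_peft_to_diffusers(sd))
```

That said, the notebook referenced above is the supported route; this remap only shows where the `Target modules ... not found` error comes from.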

loboere commented 3 months ago

It doesn't work with StableDiffusionImg2ImgPipeline: it produces a different face. Do you know what's wrong?