In line 157 of llama_icae_modeling.py:

decoder_outputs = self.icae(inputs_embeds=decoder_input_embeddings, output_hidden_states=True)

Here, self.icae carries LoRA parameters, because they were added in line 114:

self.icae = get_peft_model(self.icae, lora_config)

and enabled in line 148:

compress_outputs = self.icae(inputs_embeds=autoencoder_input_embedding, output_hidden_states=True, enable_lora=True)

However, the decoder of ICAE is supposed to be identical to the original Llama-2-7B and should not apply the LoRA parameters.

Is this a bug?
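To illustrate the concern, here is a minimal NumPy sketch (not the repository's code; the LoraLinear class and its enabled flag are hypothetical) of why an active LoRA adapter changes a layer's output, and why disabling it restores the base mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

class LoraLinear:
    """Base linear map W @ x plus an optional low-rank LoRA update
    (alpha / r) * B @ A @ x. Illustrative only, not the PEFT implementation."""

    def __init__(self, d, r=4, alpha=8):
        self.W = rng.standard_normal((d, d))         # frozen base weight
        self.A = rng.standard_normal((r, d)) * 0.1   # lora_A
        self.B = rng.standard_normal((d, r)) * 0.1   # lora_B (nonzero to make the effect visible)
        self.scale = alpha / r
        self.enabled = True  # analogous to having the adapter active

    def forward(self, x):
        y = self.W @ x
        if self.enabled:
            y = y + self.scale * (self.B @ (self.A @ x))
        return y

layer = LoraLinear(d=8)
x = rng.standard_normal(8)

with_lora = layer.forward(x)      # adapter active: output differs from the base model
layer.enabled = False
without_lora = layer.forward(x)   # adapter bypassed: output equals W @ x exactly
```

If the intent really is a vanilla Llama-2-7B decoder, one possible fix would be to wrap the decoder call in PEFT's PeftModel.disable_adapter() context manager so the LoRA weights are bypassed for that forward pass.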