I finetuned BLOOM with LoRA and would like to quantize the model with GPTQ:

```python
self.model = AutoModelForCausalLM.from_pretrained(
    self.config['checkpoint_path'],
    device_map='auto',
)
# load adapter
self.model = PeftModelForCausalLM.from_pretrained(self.model, '/tmp/bloom_ori/lora_bloom')
```
Some errors occurred. It seems that after loading the adapter, there is a dimension mismatch between `alibi` and `attention_mask`. How can I get rid of these bugs and quantize the model with the adapter?
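For context, this is roughly the pipeline I'm aiming for: a sketch (not my working code) that merges the LoRA weights into the base model with PEFT's `merge_and_unload()` first, so GPTQ only ever sees a plain `transformers` model. The checkpoint path and output path below are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModelForCausalLM

# load the base BLOOM checkpoint (placeholder path)
base = AutoModelForCausalLM.from_pretrained('checkpoint_path', device_map='auto')

# attach the LoRA adapter on top of the base model
peft_model = PeftModelForCausalLM.from_pretrained(base, '/tmp/bloom_ori/lora_bloom')

# merge_and_unload() folds the adapter weights into the base weights and
# returns a plain BloomForCausalLM, with no PEFT wrappers left in the graph
merged = peft_model.merge_and_unload()

# save the merged checkpoint so GPTQ tooling can quantize it like any
# ordinary model ('/tmp/bloom_merged' is a hypothetical output path)
merged.save_pretrained('/tmp/bloom_merged')
```

Would merging like this be the expected way to avoid the `alibi` / `attention_mask` mismatch, or is quantizing with the adapter still attached supposed to work?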