Abdullah-kwl opened this issue 7 months ago
@Abdullah-kwl, could you please paste the result of printing `model`?
```
PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096, padding_idx=2)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (k_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (down_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=14336, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLU()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): CastOutputToFloat(
        (0): Linear(in_features=4096, out_features=32000, bias=False)
      )
    )
  )
  (internal_xlora_classifier): xLoRAClassifier(
    (softmax): TemperatureScaledSoftmax(
      (softmax): Softmax(dim=-1)
    )
    (inner): ModuleList(
      (0): Linear(in_features=4096, out_features=2048, bias=True)
      (1-6): 6 x Linear(in_features=2048, out_features=2048, bias=True)
    )
    (last): Linear(in_features=2048, out_features=3, bias=True)
  )
)
```
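For readers following along: the printout shows a 4-bit quantized Mistral-7B whose `k_proj`, `v_proj`, and `down_proj` layers have been swapped for `lora.Linear4bit` wrappers holding three adapters, with the `xLoRAClassifier` (8 linear layers, 3 outputs, one per adapter) attached on top. Below is a minimal sketch of how such a model is typically built; the xlora API names and argument placement follow my reading of the project README and may differ between versions, so treat the exact signatures as assumptions, and the adapter paths as hypothetical placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import xlora  # EricLBuehler/xlora

# Load the base model in 4-bit; this is why every base layer prints as Linear4bit.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Swap in the xLoRA machinery. Assumed API: add_xlora_to_model/xLoRAConfig as in
# the README; the adapters argument may live on the config in older versions.
model = xlora.add_xlora_to_model(
    model=model,
    xlora_config=xlora.xLoRAConfig(
        model.config.hidden_size,          # 4096, the classifier's input width
        base_model_id="mistralai/Mistral-7B-Instruct-v0.1",
        xlora_depth=8,                     # (0) + (1-6) + (last) = 8 linear layers above
        device=torch.device("cuda"),
    ),
    adapters={
        "adapter_1": "./adapters/adapter_1",  # hypothetical checkpoint paths
        "adapter_2": "./adapters/adapter_2",
        "adapter_3": "./adapters/adapter_3",
    },
    verbose=True,
)
print(model)  # produces the structure shown above
```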
I have tested your updated code from https://github.com/EricLBuehler/xlora/pull/25.
Quantized models can now be trained with xLoRA, so training works, but I run into an issue when making inference with the trained quantized xLoRA model.
The error is: `RecursionError: maximum recursion depth exceeded while calling a Python object`
You can review my notebook at: https://colab.research.google.com/drive/1_B1ualsMbRfYWy0gdjdMi9RSDU-qmPHf#scrollTo=I4UZaqDAnnB6
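Not a fix, but for anyone debugging this: a `RecursionError` at inference time in wrapper-heavy model stacks is very often an unguarded `__getattr__` that delegates to an attribute which is itself resolved through `__getattr__`. Whether that is the cause here would need the full traceback; the snippet below is only a hypothetical illustration of the pattern and its usual repair, not code from xlora.

```python
import torch.nn as nn

class BadWrapper(nn.Module):
    """Delegating wrapper with the classic recursion bug."""
    def __getattr__(self, name):
        # BUG: `self.model` itself goes through __getattr__, so whenever `model`
        # is missing (e.g. mid-__init__ or after deepcopy/unpickling) this
        # recurses until "maximum recursion depth exceeded".
        return getattr(self.model, name)

class GoodWrapper(nn.Module):
    """Same delegation, but safe."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model  # registered in _modules by nn.Module

    def __getattr__(self, name):
        try:
            # nn.Module.__getattr__ resolves registered params/buffers/submodules.
            return super().__getattr__(name)
        except AttributeError:
            # Fetch `model` without re-entering this method, then delegate.
            return getattr(super().__getattr__("model"), name)
```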
Thank you. I plan on working on this later today.
Also, check out this notebook: https://colab.research.google.com/drive/1Eyh-mBd0LpcJwyzBHjGKhwNLQ9R74eLl?usp=drive_open
You can verify that a few lines are being repeated in the output.
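Repeated lines can come either from the xLoRA scalings or simply from greedy decoding, so it may be worth ruling out the decoding side first. The arguments below are part of the standard transformers `generate` API; the prompt and checkpoint name are just placeholders, and `model` is assumed to be the trained xLoRA model from the notebook.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
inputs = tokenizer("Explain X-LoRA in one paragraph.", return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,   # down-weights tokens that were already generated
    no_repeat_ngram_size=3,   # hard-blocks repeating any 3-gram verbatim
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```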
What adjustments would we need to make if we want to extend xLoRA to IA^3?
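For reference, this is what a plain peft IA^3 adapter configuration looks like (`IA3Config` and `feedforward_modules` are the standard peft API). An xLoRA-style extension would presumably have the classifier mix each adapter's learned IA^3 scaling vectors rather than LoRA's A/B pairs; that routing does not exist yet, so the sketch below only shows the IA^3 side.

```python
from peft import IA3Config, get_peft_model

# IA^3 learns one scaling vector per targeted module instead of LoRA's
# low-rank A/B pair, so there is far less to mix per adapter.
ia3_config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "v_proj", "down_proj"],
    feedforward_modules=["down_proj"],  # modules IA^3 treats as feed-forward
)
ia3_model = get_peft_model(model, ia3_config)
ia3_model.print_trainable_parameters()
```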
@Abdullah-kwl, we have begun work here and it will be completed shortly.
To train xLoRA on free Colab we need to load a quantized model, but xLoRA currently does not support quantized models: the layers are not swapped. On free Colab one mostly uses BitsAndBytesConfig to load the model in 4-bit or 8-bit, yet such a quantized model cannot be converted into an xLoRA model. Please update xLoRA to support quantized models.
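For concreteness, this is the standard free-Colab loading path the request refers to; the ask is for xlora's layer swapping to accept the resulting bitsandbytes `Linear4bit`/`Linear8bitLt` modules. Everything below is the regular transformers/peft API; only the checkpoint choice is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # or load_in_8bit=True
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",                      # small enough for a free-tier T4
)
model = prepare_model_for_kbit_training(model)  # upcasts norms, enables input grads
```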