EricLBuehler / xlora

X-LoRA: Mixture of LoRA Experts

Would you kindly update Xlora to support Quantized Models? #24

Abdullah-kwl opened this issue 7 months ago · Status: Open

Abdullah-kwl commented 7 months ago

To train X-LoRA on free Colab we need to load a quantized model, but currently X-LoRA does not support quantized models and the layers are not swapped. Most users load the model in 4-bit or 8-bit with BitsAndBytesConfig on free Colab, but such a quantized model cannot be converted into an X-LoRA model. Please update X-LoRA to support quantized models. (Screenshot attached: Screenshot 2024-03-20 052216)
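
Roughly, the conversion I am attempting looks like this. This is only a sketch: the model ID, adapter checkpoint paths, and X-LoRA settings are placeholders, and the xlora calls follow the usage pattern from the project README, so the exact signatures may differ:

```python
import torch
import xlora
from transformers import AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # placeholder base model

# Load the base model in 4-bit so it fits on a free Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
config = AutoConfig.from_pretrained(model_id)

# Convert the quantized base model into an X-LoRA model.
# This is the step that currently fails: the Linear4bit layers are not swapped.
model_created = xlora.add_xlora_to_model(
    model=model,
    xlora_config=xlora.xLoRAConfig(
        config.hidden_size,
        base_model_id=model_id,
        xlora_depth=8,  # placeholder classifier depth
        device=torch.device("cuda"),
        adapters={
            "adapter_1": "./path/to/adapter_1/",  # placeholder adapter checkpoints
            "adapter_2": "./path/to/adapter_2/",
            "adapter_3": "./path/to/adapter_3/",
        },
    ),
    verbose=True,
)
```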

EricLBuehler commented 7 months ago

@Abdullah-kwl, could you please paste the output of printing the model?

Abdullah-kwl commented 7 months ago

PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralForCausalLM(
      (model): MistralModel(
        (embed_tokens): Embedding(32000, 4096, padding_idx=2)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (k_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=4096, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=4096, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=1024, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): Linear4bit(in_features=4096, out_features=4096, bias=False)
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (up_proj): Linear4bit(in_features=4096, out_features=14336, bias=False)
              (down_proj): lora.Linear4bit(
                (base_layer): Linear4bit(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (adapter_1): Dropout(p=0.1, inplace=False)
                  (adapter_2): Dropout(p=0.1, inplace=False)
                  (adapter_3): Dropout(p=0.1, inplace=False)
                )
                (lora_A): ModuleDict(
                  (adapter_1): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_2): Linear(in_features=14336, out_features=4, bias=False)
                  (adapter_3): Linear(in_features=14336, out_features=4, bias=False)
                )
                (lora_B): ModuleDict(
                  (adapter_1): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_2): Linear(in_features=4, out_features=4096, bias=False)
                  (adapter_3): Linear(in_features=4, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLU()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
      )
      (lm_head): CastOutputToFloat(
        (0): Linear(in_features=4096, out_features=32000, bias=False)
      )
    )
  )
  (internal_xlora_classifier): xLoRAClassifier(
    (softmax): TemperatureScaledSoftmax(
      (softmax): Softmax(dim=-1)
    )
    (inner): ModuleList(
      (0): Linear(in_features=4096, out_features=2048, bias=True)
      (1-6): 6 x Linear(in_features=2048, out_features=2048, bias=True)
    )
    (last): Linear(in_features=2048, out_features=3, bias=True)
  )
)

Abdullah-kwl commented 7 months ago

I have tested your updated code from https://github.com/EricLBuehler/xlora/pull/25

Quantized models can now be trained with X-LoRA, so training works, but I run into a problem when I try to run inference with the trained quantized X-LoRA model.

The error is: RecursionError: maximum recursion depth exceeded while calling a Python object

(Screenshots attached: Screenshot 2024-03-26 170126, Screenshot 2024-03-26 170248)
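
For reference, the failing step is just a standard generate call on the converted model. A rough sketch, continuing from the conversion snippet above (so `model_id` and `model_created` are the same placeholder names; the prompt and generation settings are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Illustrative prompt; generation on the quantized, X-LoRA-wrapped model is
# where the RecursionError is raised.
inputs = tokenizer("What is X-LoRA?", return_tensors="pt").to("cuda")
outputs = model_created.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```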

Abdullah-kwl commented 7 months ago

You can review my notebook at: https://colab.research.google.com/drive/1_B1ualsMbRfYWy0gdjdMi9RSDU-qmPHf#scrollTo=I4UZaqDAnnB6

EricLBuehler commented 7 months ago

Thank you. I plan on working on this later today.

Abdullah-kwl commented 7 months ago

Also, check out this notebook: https://colab.research.google.com/drive/1Eyh-mBd0LpcJwyzBHjGKhwNLQ9R74eLl?usp=drive_open

There you can verify that a few lines are being repeated in the output.

Abdullah-kwl commented 7 months ago

What adjustments would we need to make if we wanted to extend X-LoRA to support IA^3?

EricLBuehler commented 7 months ago

@Abdullah-kwl, we have begun work here and it will be completed shortly.

TheTahaaa commented 2 months ago

Hi @EricLBuehler ,

Just wanted to make sure that the current version supports quantised models, since it looks like some tests haven't passed here and the commit hasn't been merged into the main branch.