predibase / lorax

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
https://loraexchange.ai
Apache License 2.0

If LoRAX is based on Punica kernels, will it be able to support LoRA adapters for Mistral NeMo 12B? #549

Open tensimixt opened 1 month ago

tensimixt commented 1 month ago

Feature request

If LoRAX is based on Punica kernels, will it be able to support LoRA adapters for Mistral NeMo 12B, which has a vocab size > 130k? Currently vLLM, for example, doesn't support vocab_size > 128512 when enable_lora=True.
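For reference, a minimal sketch of the vLLM behavior described above, assuming a vLLM release from around the time of this issue that still enforces the 128512 vocab-size cap for LoRA. The model ID is Mistral NeMo's public Hub ID and enable_lora is vLLM's real flag, but the exact error message varies by version:

```python
# Repro sketch for the vLLM limitation described above.
# Assumes vLLM is installed; exact error text varies by version.
from vllm import LLM

try:
    # Mistral NeMo 12B uses a ~131k-entry vocabulary, above vLLM's
    # LoRA vocab-size limit of 128512 at the time of this issue.
    llm = LLM(
        model="mistralai/Mistral-Nemo-Instruct-2407",
        enable_lora=True,
    )
except ValueError as err:
    print(err)  # vLLM's LoRA config validation rejects the vocab size here
```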

If Hugging Face TGI and LoRAX are also based on Punica kernels, I'd expect them to share this limitation. Or does it not apply to TGI and LoRAX?
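Independent of which kernels a server uses, one relevant check is whether a given adapter was even trained on the embedding / LM-head layers, since those are the layers tied to vocab size. A sketch, assuming the adapter is on the Hub; "my-org/nemo-lora" is a hypothetical repo ID, not a real adapter:

```python
# Sketch: inspect which modules a LoRA adapter actually targets.
# "my-org/nemo-lora" is a hypothetical repo id used for illustration.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="my-org/nemo-lora", filename="adapter_config.json")
with open(path) as f:
    config = json.load(f)

print(config.get("target_modules"))   # e.g. ["q_proj", "k_proj", "v_proj", "o_proj"]
print(config.get("modules_to_save"))  # embed_tokens / lm_head show up here if trained
```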

Thank you!

Motivation

Be able to run inference with Mistral NeMo 12B + a LoRA adapter (in a multi-LoRA world).
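For concreteness, a hedged sketch of what that would look like against a running LoRAX server, using LoRAX's documented /generate endpoint with a per-request adapter_id. The host and adapter ID are placeholders:

```python
# Sketch: per-request adapter selection against a running LoRAX server.
# The "adapter_id" parameter on /generate is from LoRAX's documented REST API;
# host and adapter id below are placeholders.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "[INST] Summarize LoRAX in one sentence. [/INST]",
        "parameters": {
            "max_new_tokens": 64,
            "adapter_id": "my-org/nemo-lora",  # hypothetical adapter repo
        },
    },
)
print(resp.json()["generated_text"])
```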

Your contribution

I checked various deployment providers and found this limitation.

Nero10578 commented 3 weeks ago

Did you figure out whether Mistral NeMo 12B works with LoRA adapters in LoRAX? It still doesn't work with vLLM or Aphrodite, and I'm looking for alternatives.