unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

Does TensorRT-LLM support serving a 4-bit quantized Unsloth Llama model? #1309

Open jayakommuru opened 1 day ago

jayakommuru commented 1 day ago

We want to deploy https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit, which is a 4-bit quantized version of the Llama-3.2-1B model, quantized using bitsandbytes. Can we deploy this using the TensorRT-LLM backend? If so, is there any documentation to refer to?

danielhanchen commented 7 hours ago

You can use vLLM: https://docs.vllm.ai/en/latest/quantization/bnb.html
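
For reference, a minimal sketch of loading this checkpoint with vLLM's bitsandbytes support, following the linked docs. The flags shown match older vLLM versions (newer releases may infer `load_format` automatically), and the prompt and sampling settings are purely illustrative:

```python
# Minimal sketch, assuming vLLM with bitsandbytes support is installed
# (pip install vllm bitsandbytes). Flag names follow the linked vLLM docs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="unsloth/Llama-3.2-1B-Instruct-bnb-4bit",  # bnb-4bit checkpoint from the question
    quantization="bitsandbytes",  # run with bitsandbytes quantization kernels
    load_format="bitsandbytes",   # load the pre-quantized bnb weights directly
)

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["What is the capital of France?"], params)
print(outputs[0].outputs[0].text)
```

The same flags should apply when serving through vLLM's OpenAI-compatible server, e.g. `vllm serve unsloth/Llama-3.2-1B-Instruct-bnb-4bit --quantization bitsandbytes --load-format bitsandbytes` (command shape assumed from vLLM's serving docs, not stated in this thread).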