unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

AutoModelForSequenceClassification or output is only one token #768

Open shyoulala opened 4 months ago

shyoulala commented 4 months ago

I am using AutoModelForSequenceClassification for classification with a large model. Can I use this library, and if so, how? Additionally, if my output is only one token and I run batch inference, will this library still provide acceleration? Thank you for your response.
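When the output is a single token, a common pattern (independent of any particular library) is to map each class label to one token id and take the argmax over just those logits at the final position. A minimal sketch below, with purely hypothetical token ids and logit values chosen for illustration:

```python
# Hypothetical class-label -> token-id mapping (the ids are illustrative only;
# in practice they come from the tokenizer, e.g. the id of " positive").
CLASS_TOKEN_IDS = {"negative": 1101, "neutral": 2202, "positive": 3303}

def classify_from_logits(last_token_logits):
    """Pick the class whose token id has the highest logit.

    last_token_logits: dict mapping token id -> logit at the final position.
    """
    return max(
        CLASS_TOKEN_IDS,
        key=lambda label: last_token_logits[CLASS_TOKEN_IDS[label]],
    )

# Toy logits for the three class token ids.
logits = {1101: -0.3, 2202: 0.1, 3303: 2.4}
print(classify_from_logits(logits))  # "positive" has the highest logit
```

Because only one decode step is needed per example, the cost per batch is a single forward pass, which is what makes batching attractive here.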

shyoulala commented 4 months ago

The large model is Gemma 2 9B.

shyoulala commented 4 months ago

Training can be accelerated, but can inference be accelerated as well in such a scenario?

danielhanchen commented 4 months ago

Yes, inference is 2x faster via Unsloth. However, batched inference is almost entirely matrix-multiplication bound, so the speedups will be much smaller.
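A back-of-the-envelope sketch of why batched decoding is matmul-bound: each generated token costs roughly 2P FLOPs for a dense decoder with P parameters, virtually all of it in matrix multiplications, so the per-step cost scales linearly with batch size regardless of kernel-level optimizations. The figures below are rough estimates, not measurements:

```python
# Rough rule of thumb: one decode step over a dense decoder with P parameters
# costs about 2 * P FLOPs per sequence, nearly all of it matmul work.
def decode_step_flops(param_count, batch_size):
    """Approximate FLOPs for one batched decode step."""
    return 2 * param_count * batch_size

P = 9_000_000_000  # Gemma 2 9B, approximate parameter count
flops = decode_step_flops(P, batch_size=32)
print(f"~{flops:.2e} FLOPs per decode step at batch size 32")
```

Since that matmul work is irreducible, a faster framework mostly trims overhead around it, which matters less as the batch (and thus the matmul share of runtime) grows.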