unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0
15.68k stars 1.06k forks

AutoModelForSequenceClassification or output is only one token #768

Open shyoulala opened 2 months ago

shyoulala commented 2 months ago

I am using AutoModelForSequenceClassification for classification with a large model. Can I use this library, and if so, how? Additionally, if my output is only one token and I run batch inference, will this library still provide a speedup? Thank you for your response.

shyoulala commented 2 months ago

The large model is Gemma 2 9B.

shyoulala commented 2 months ago

Training can be accelerated, but can inference also be accelerated in this scenario?

danielhanchen commented 2 months ago

Yes, inference is 2x faster via Unsloth. However, batched inference is almost entirely matrix-multiplication bound, so the speedup there will be much smaller.
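
For the generation path, the usual Unsloth pattern is roughly the sketch below. This is illustrative only: the model name, `max_seq_length`, and prompts are placeholders, it assumes a CUDA GPU with the model weights available, and it works around the classification question by generating a single token with a causal LM rather than using an `AutoModelForSequenceClassification` head (whose support was not confirmed in this thread).

```python
# Minimal sketch of batched single-token inference with Unsloth's fast path.
# Assumes a CUDA GPU; model name and settings are illustrative, not prescriptive.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b",  # hypothetical choice for this thread
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables the faster inference path

prompts = [
    "Classify the sentiment: I loved it.",
    "Classify the sentiment: This was awful.",
]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")

# One generated token per example approximates a classification output;
# the whole batch still runs as large matmuls, hence the modest batched speedup.
outputs = model.generate(**inputs, max_new_tokens=1)
print(tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:]))
```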