🤗 Optimum Neuron is the interface between the 🤗 Transformers library and AWS Accelerators including AWS Trainium and AWS Inferentia. It provides a set of tools enabling easy model loading, training and inference on single- and multi-Accelerator settings for different downstream tasks. The list of officially validated models and tasks is available here. Users can try other models and tasks with only a few changes.
To install the latest release of this package:
For AWS Trainium (trn1) or AWS Inferentia2 (inf2):
pip install --upgrade-strategy eager optimum[neuronx]
For AWS Inferentia (inf1):
pip install --upgrade-strategy eager optimum[neuron]
Optimum Neuron is a fast-moving project, and you may want to install it from source:
pip install git+https://github.com/huggingface/optimum-neuron.git
Alternatively, you can install the package without pip as follows:
git clone https://github.com/huggingface/optimum-neuron.git
cd optimum-neuron
python setup.py install
Make sure that you have installed the Neuron driver and tools before installing optimum-neuron; a more extensive guide is available here.
Last but not least, don't forget to install the requirements for every example:
cd <example-folder>
pip install -r requirements.txt
🤗 Optimum Neuron was designed with one goal in mind: to make training and inference straightforward for any 🤗 Transformers user while leveraging the complete power of AWS Accelerators.
There are two main classes one needs to know: the NeuronTrainer for training on Trainium, and the NeuronModelForXXX classes for inference on Neuron devices.
The NeuronTrainer is very similar to the 🤗 Transformers Trainer, and adapting a script that uses the Trainer to work on Trainium mostly consists of swapping the Trainer class for the NeuronTrainer one. That is how most of the example scripts were adapted from their original counterparts.
from transformers import TrainingArguments
+from optimum.neuron import NeuronTrainer as Trainer
training_args = TrainingArguments(
    # training arguments...
)

# A lot of code here

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,  # Original training arguments.
    train_dataset=train_dataset if training_args.do_train else None,
    eval_dataset=eval_dataset if training_args.do_eval else None,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
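After the swap, the rest of the script drives training through the exact same API as the standard Trainer. A minimal sketch of how such a script typically finishes, using the usual Trainer helpers:

# Standard Trainer API, unchanged by the NeuronTrainer swap
if training_args.do_train:
    train_result = trainer.train()
    trainer.save_model()
    trainer.log_metrics("train", train_result.metrics)
if training_args.do_eval:
    metrics = trainer.evaluate()
    trainer.log_metrics("eval", metrics)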
You can compile and export your 🤗 Transformers models to a serialized format before running inference on Neuron devices:
optimum-cli export neuron \
--model distilbert-base-uncased-finetuned-sst-2-english \
--batch_size 1 \
--sequence_length 32 \
--auto_cast matmul \
--auto_cast_type bf16 \
distilbert_base_uncased_finetuned_sst2_english_neuron/
The command above will export distilbert-base-uncased-finetuned-sst-2-english with static shapes batch_size=1 and sequence_length=32, and cast all matmul operations from FP32 to BF16. Check out the exporter guide for more compilation options.
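The same export can also be done directly from Python with the NeuronModelForXXX classes. A minimal sketch mirroring the CLI options above (the keyword arguments shown are assumed to match the installed exporter version; double-check them against the exporter guide):

from optimum.neuron import NeuronModelForSequenceClassification

# Compile with the same static shapes and casting options as the CLI call above
model = NeuronModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    batch_size=1,
    sequence_length=32,
    auto_cast="matmul",
    auto_cast_type="bf16",
)
model.save_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron/")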
Then you can run the exported Neuron model on Neuron devices with the NeuronModelForXXX classes, which are similar to the AutoModelForXXX classes in 🤗 Transformers:
from transformers import AutoTokenizer
-from transformers import AutoModelForSequenceClassification
+from optimum.neuron import NeuronModelForSequenceClassification
# PyTorch checkpoint
-model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
+model = NeuronModelForSequenceClassification.from_pretrained("distilbert_base_uncased_finetuned_sst2_english_neuron")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("Hamilton is considered to be the best musical of past years.", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax().item()])
# 'POSITIVE'
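For convenience, an exported model can also be wrapped in a pipeline. A minimal sketch, assuming the pipeline helper of optimum-neuron supports the target task (see the pipelines section of the documentation for details):

from optimum.neuron import pipeline

# Point the pipeline at the already exported model directory
classifier = pipeline("text-classification", model="distilbert_base_uncased_finetuned_sst2_english_neuron")
print(classifier("Hamilton is considered to be the best musical of past years."))
# [{'label': 'POSITIVE', 'score': ...}]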
Check out the documentation of Optimum Neuron for more advanced usage.
If you run into any issues while using these, please open an issue or submit a pull request.
This repository also maintains a Text Generation Inference (TGI) Docker image for deployment on AWS Inferentia2.
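Once a container built from that image is serving a model, it can be queried like any other TGI deployment. A minimal sketch, assuming the server is exposed locally on port 8080 (the endpoint URL is an assumption and depends on how the container is launched):

from huggingface_hub import InferenceClient

# Point the client at your running TGI container (URL/port are deployment-specific)
client = InferenceClient("http://localhost:8080")
print(client.text_generation("What is Deep Learning?", max_new_tokens=64))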