huggingface / optimum

🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
https://huggingface.co/docs/optimum/main/
Apache License 2.0

Consistent use of `"sequence-classification"` vs. `"text-classification", "audio-classification"` #171

Open · fxmarty opened this issue 2 years ago

fxmarty commented 2 years ago

Currently, transformers' FeaturesManager._TASKS_TO_AUTOMODELS is used to handle the strings passed to load models. Notably, it is used in the ORTQuantizer.from_pretrained() method (where, for example, feature="sequence-classification"):

https://github.com/huggingface/optimum/blob/5653a16727fc99b627d45827485b2ac0ace4c66f/optimum/onnxruntime/quantization.py#L102

Meanwhile, the pipeline abstraction for text classification expects pipeline(..., task="text-classification"). Hence users have to juggle both "text-classification" and "sequence-classification" for what is conceptually the same task, which can be troublesome.
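For illustration, the two vocabularies line up roughly as follows. The mapping below is only a sketch of the aliasing users currently have to do by hand, not an existing optimum API (TASK_TO_FEATURE and task_to_feature are hypothetical names):

# Hypothetical alias table from pipeline() task names to ONNX export feature names
TASK_TO_FEATURE = {
    "text-classification": "sequence-classification",   # *ForSequenceClassification heads (text)
    "audio-classification": "sequence-classification",  # *ForSequenceClassification heads (audio)
    "token-classification": "token-classification",     # already aligned
}

def task_to_feature(task: str) -> str:
    # Fall back to the task name itself when the two vocabularies already agree
    return TASK_TO_FEATURE.get(task, task)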

A handy workflow could be the following:

from onnxruntime.quantization import QuantFormat, QuantizationMode, QuantType
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import QuantizationConfig
from optimum.onnxruntime.modeling_ort import ORTModel, ORTModelForSequenceClassification

from optimum.pipelines import pipeline as _optimum_pipeline
from transformers import pipeline as _transformers_pipeline

# Use dynamic quantization here (set to True for static quantization, which requires calibration)
static_quantization = False
task = "text-classification"

# Create the quantization configuration containing all the quantization parameters
qconfig = QuantizationConfig(
    is_static=static_quantization,
    format=QuantFormat.QDQ if static_quantization else QuantFormat.QOperator,
    mode=QuantizationMode.QLinearOps if static_quantization else QuantizationMode.IntegerOps,
    activations_dtype=QuantType.QInt8 if static_quantization else QuantType.QUInt8,
    weights_dtype=QuantType.QInt8,
    per_channel=False,
    reduce_range=False,
    operators_to_quantize=["Add"],
)

# NOTE: `feature` currently expects an ONNX export feature name such as
# "sequence-classification"; passing the pipeline task string raises a KeyError (see below)
quantizer = ORTQuantizer.from_pretrained(
    "Bhumika/roberta-base-finetuned-sst2",
    feature=task,
    opset=15,
)

tokenizer = quantizer.tokenizer

model_path = "model.onnx"
quantized_model_path = "quantized_model.onnx"

# Dynamic quantization: no calibration dataset, hence no preprocessor or calibration ranges
quantization_preprocessor = None
ranges = None

# Export the quantized model
quantizer.export(
    onnx_model_path=model_path,
    onnx_quantized_model_output_path=quantized_model_path,
    calibration_tensors_range=ranges,
    quantization_config=qconfig,
    preprocessor=quantization_preprocessor,
)

# Load the quantized ONNX model into an inference session and wrap it for sequence classification
ort_session = ORTModel.load_model(quantized_model_path)
ort_model = ORTModelForSequenceClassification(ort_session, config=quantizer.model.config)

# Build the accelerated pipeline; note that this expects the *pipeline* task name
ort_pipeline = _optimum_pipeline(
    task=task,
    model=ort_model,
    tokenizer=tokenizer,
    feature_extractor=None,
    accelerator="ort"
)

which currently raises KeyError: "Unknown task: text-classification" in ORTQuantizer.from_pretrained().
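The KeyError comes from the fact that the task string is eventually looked up in transformers' FeaturesManager mapping, which only knows the export feature names. A quick way to see the mismatch (this inspects a private attribute, so it is for illustration only):

from transformers.onnx.features import FeaturesManager

print("text-classification" in FeaturesManager._TASKS_TO_AUTOMODELS)      # False
print("sequence-classification" in FeaturesManager._TASKS_TO_AUTOMODELS)  # True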

Right now we need to pass something like

task = "text-classification"
feature = "sequence-classification"

and provide the feature to ORTQuantizer while the task goes to the pipeline, which is cumbersome.
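Concretely, the workflow above only runs today if the two strings are kept separate and each one is handed to the right component, along these lines (same checkpoint and arguments as above):

from optimum.onnxruntime import ORTQuantizer

task = "text-classification"         # what pipeline() expects
feature = "sequence-classification"  # what ORTQuantizer.from_pretrained() expects

quantizer = ORTQuantizer.from_pretrained(
    "Bhumika/roberta-base-finetuned-sst2",
    feature=feature,  # the export feature name, not the pipeline task name
    opset=15,
)
# ... quantization, export and model wrapping unchanged ...
# and `task` is then passed to _optimum_pipeline(task=task, ...) at the end.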

Possible solutions are:

@lewtun

lewtun commented 2 years ago

Thanks for creating this detailed issue @fxmarty!

One challenge with unifying the "features" used in the ONNX export and the tasks defined in the pipeline() function is that some features come with past key values that need to be differentiated; for instance, "causal-lm" and "causal-lm-with-past" are two distinct features.

Having said that, I agree that it would be nice if one could reuse the same task taxonomy from the transformers.pipeline() function, so maybe some light refactoring can capture the majority of tasks.
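One way to keep the pipeline taxonomy while still distinguishing the past-key-values variants could be to expose "-with-past" as a flag rather than as part of the name. A minimal sketch, assuming a hypothetical get_feature helper on top of a task-to-feature alias table:

def get_feature(task: str, use_past: bool = False) -> str:
    # Hypothetical helper: translate a pipeline task name into an ONNX export
    # feature name, appending "-with-past" when cached key/values are wanted.
    aliases = {"text-generation": "causal-lm", "text-classification": "sequence-classification"}
    feature = aliases.get(task, task)
    return feature + "-with-past" if use_past else feature

print(get_feature("text-generation"))                 # "causal-lm"
print(get_feature("text-generation", use_past=True))  # "causal-lm-with-past"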

cc @michaelbenayoun, who knows more about the history behind the ONNX "feature" names

michaelbenayoun commented 2 years ago

Yes, I think the original feature names were chosen by looking at the class names (BertForSequenceClassification, etc.). I think @fxmarty's first suggestion could work and is easy to implement.
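For reference, the correspondence described here looks roughly like this (the CLASS_TO_FEATURE table is purely illustrative; class names come from transformers, feature names from the ONNX export):

# Illustration of how head class names map to export feature names
CLASS_TO_FEATURE = {
    "BertForSequenceClassification": "sequence-classification",
    "BertForTokenClassification": "token-classification",
    "BertForQuestionAnswering": "question-answering",
}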