microsoft / DeBERTa

The implementation of DeBERTa
MIT License

Convert DeBERTa model to ONNX with mixed precision #120

Open SergeyShk opened 1 year ago

SergeyShk commented 1 year ago

Hello. I'm using deberta-v3-base for a text classification task. After training, I convert the PyTorch model to ONNX format. Everything works like a charm, except that the exported model is twice the size of the original DeBERTa checkpoint (~750 MB). Because of this, I want to convert it with mixed precision, i.e. fp16. I tried two approaches:

  1. Run model.half() before the ONNX export (see the sketch after the snippet below)
  2. Use the following code:
from onnxruntime.transformers import optimizer

# opt_level=0: fusions are applied by the optimizer script itself,
# not by ONNX Runtime's graph optimizer
optimized_model = optimizer.optimize_model(
    "deberta.onnx", model_type="bert", num_heads=12,
    hidden_size=768, use_gpu=False, opt_level=0,
)
optimized_model.convert_float_to_float16()  # cast weights and ops to fp16
optimized_model.save_model_to_file("deberta_fp16.onnx")
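
For reference, the first approach looked roughly like this (a minimal sketch, not my exact code: the checkpoint path, input names, and export arguments are assumptions):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical fine-tuned checkpoint path; substitute your own.
model = AutoModelForSequenceClassification.from_pretrained("./deberta-v3-base-finetuned")
tokenizer = AutoTokenizer.from_pretrained("./deberta-v3-base-finetuned")

# Cast to fp16 and trace on GPU; many CPU ops lack half-precision kernels,
# so exporting a .half() model on CPU tends to fail during tracing.
model = model.eval().half().cuda()

enc = tokenizer("example input", return_tensors="pt")
input_ids = enc["input_ids"].cuda()
attention_mask = enc["attention_mask"].cuda()

torch.onnx.export(
    model,
    (input_ids, attention_mask),
    "deberta_fp16.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
    opset_version=13,
)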

But in both cases I get the following warnings during inference on CPU:

2023-01-06 10:46:46.332352649 [W:onnxruntime:, constant_folding.cc:179 ApplyImpl] Could not find a CPU kernel and hence can't constant fold LayerNormalization node 'LayerNorm_1'
2023-01-06 10:46:46.414666254 [W:onnxruntime:, constant_folding.cc:179 ApplyImpl] Could not find a CPU kernel and hence can't constant fold LayerNormalization node 'LayerNorm_1'
2023-01-06 10:46:46.425605272 [W:onnxruntime:, constant_folding.cc:179 ApplyImpl] Could not find a CPU kernel and hence can't constant fold LayerNormalization node 'LayerNorm_1'

I also tried setting use_gpu=True in the optimize_model call. The warnings disappeared, but inference was 3-4 times slower.
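
The warnings point at the fp16 LayerNormalization op, for which ONNX Runtime apparently has no CPU kernel. One direction I haven't verified: keep that op (and the graph inputs/outputs) in fp32 during the conversion. A sketch, assuming op_block_list and keep_io_types from onnxruntime.transformers.float16.convert_float_to_float16 are forwarded by OnnxModel.convert_float_to_float16 (this varies with the onnxruntime version):

import numpy as np
from onnxruntime import InferenceSession
from onnxruntime.transformers import optimizer

optimized_model = optimizer.optimize_model(
    "deberta.onnx", model_type="bert", num_heads=12,
    hidden_size=768, use_gpu=False, opt_level=0,
)

# Keep graph inputs/outputs and LayerNormalization in fp32 so the CPU
# execution provider still has a kernel for that op; the rest goes fp16.
optimized_model.convert_float_to_float16(
    keep_io_types=True,
    op_block_list=["LayerNormalization"],
)
optimized_model.save_model_to_file("deberta_fp16.onnx")

# Quick CPU smoke test (input names and shapes are assumptions).
session = InferenceSession("deberta_fp16.onnx", providers=["CPUExecutionProvider"])
feeds = {
    "input_ids": np.ones((1, 128), dtype=np.int64),
    "attention_mask": np.ones((1, 128), dtype=np.int64),
}
logits = session.run(None, feeds)[0]

If that works, only LayerNormalization stays in fp32 while the rest of the graph runs in fp16, which should preserve most of the size reduction.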