IntelLabs / nlp-architect

A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
https://intellabs.github.io/nlp-architect
Apache License 2.0

How could I improve the inference performance? #154

Open ZhiyiLan opened 4 years ago

ZhiyiLan commented 4 years ago

I used the command

nlp-train transformer_glue \
    --task_name mrpc \
    --model_name_or_path bert-base-uncased \
    --model_type quant_bert \
    --learning_rate 2e-5 \
    --output_dir /tmp/mrpc-8bit \
    --evaluate_during_training \
    --data_dir /path/to/MRPC \
    --do_lower_case

to train the model, and

nlp-inference transformer_glue \
    --model_path /tmp/mrpc-8bit \
    --task_name mrpc \
    --model_type quant_bert \
    --output_dir /tmp/mrpc-8bit \
    --data_dir /path/to/MRPC \
    --do_lower_case \
    --overwrite_output_dir \
    --load_quantized_model

to run inference, but I got the same performance as without the --load_quantized_model flag. How could I improve the inference performance?

ZhiyiLan commented 4 years ago

I printed the weights of quant_pytorch_model.bin, but the dtypes were mixed: some were int8, some were float, and some were int32. Why aren't they all int8?
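A check along these lines reproduces what I saw (a minimal sketch; the path comes from the --output_dir used for training, and I am assuming the checkpoint is a plain PyTorch state dict):

import torch

# Load the checkpoint written by nlp-train (path taken from --output_dir above).
state_dict = torch.load("/tmp/mrpc-8bit/quant_pytorch_model.bin", map_location="cpu")

# Print each tensor's dtype to see which parameters were actually quantized.
for name, tensor in state_dict.items():
    if torch.is_tensor(tensor):
        print(name, tensor.dtype)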

ofirzaf commented 4 years ago

Hi,

Our quantization scheme is:

  1. FC Weights are quantized to Int8
  2. FC Biases are quantized to Int32
  3. Everything else is left in FP32

For more information please refer to our published paper on this model: Q8BERT: Quantized 8Bit BERT
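To make the scheme concrete, here is a generic sketch of per-tensor symmetric linear quantization in the spirit of the paper (the max-based scale and the helper names are illustrative, not the exact implementation in nlp-architect):

import torch

def quantize_int8(x):
    # Per-tensor symmetric quantization: one scale per tensor, derived from
    # the maximum absolute value, mapping floats into [-127, 127].
    scale = 127.0 / x.abs().max().clamp(min=1e-8)
    q = torch.clamp(torch.round(x * scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float tensor.
    return q.to(torch.float32) / scale

weight = torch.randn(768, 768)          # stand-in for an FC weight matrix
q_weight, scale = quantize_int8(weight)
print(q_weight.dtype)                   # torch.int8
print((dequantize(q_weight, scale) - weight).abs().max())  # quantization error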

Regarding the --load_quantized_model flag not working for you, please make sure you are using release 0.5.1 or above. If you are still unable to run with this flag, please share which code base you are using and any other details that might be relevant.

I would like to note that in order to get a speedup from the quantized model, you must run it on supporting hardware and software.
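For example, on Intel CPUs the int8 speedup generally relies on the AVX512-VNNI instructions; a quick way to check whether your machine advertises them (a sketch, assuming a Linux host) is:

# Check /proc/cpuinfo for the AVX512-VNNI flag (Linux only); without it,
# int8 inference is unlikely to be faster than FP32.
with open("/proc/cpuinfo") as f:
    cpu_flags = f.read()

print("VNNI available" if "avx512_vnni" in cpu_flags else "VNNI not detected")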

Njuapp commented 4 years ago

I have a similar problem: inference with the quantized model takes 1 min 54 s, while inference with the unquantized one takes 1 min 58 s. My version is up to date. Is my hardware unable to take advantage of the Q8BERT model?

Njuapp commented 4 years ago

P.S. I ran inference on CPU; the CPU is an Intel(R) Xeon(R) Silver 4110 @ 2.10GHz.

Ayyub29 commented 2 years ago

Did you ever get an answer?