I used the DistilBERT model with the SST-2 dataset for text classification, then converted the trained model to TensorFlow Lite using float16 quantization. Here's my notebook. When I evaluated the float16 TensorFlow Lite model, I saw a tremendous performance drop (~49% validation accuracy) relative to the original model. Here's the notebook.
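For reference, the conversion follows the standard post-training float16 quantization recipe from the TFLite converter API. This is a minimal sketch: the tiny Keras model below is a hypothetical stand-in for the fine-tuned DistilBERT model, just to show the converter settings used.

```python
import tensorflow as tf

# Hypothetical stand-in for the fine-tuned DistilBERT Keras model;
# in the notebook this is the trained text-classification model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Post-training float16 quantization: enable default optimizations
# and restrict the target types to float16.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# convert() returns the serialized .tflite flatbuffer as bytes.
tflite_model = converter.convert()
```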
@Pierrci
Am I missing something?
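In case it helps narrow things down, here is roughly how the float16 TFLite model gets evaluated with the interpreter. This is a sketch using the same hypothetical stand-in model as above; with a multi-input model like DistilBERT (`input_ids`, `attention_mask`), one common pitfall is feeding tensors by assumed position instead of by the names reported in `get_input_details()`, which silently produces near-random accuracy.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in: a tiny Keras model converted with the same
# float16 recipe, just to demonstrate the evaluation loop.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Load the converted model into the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# With multiple inputs, match each tensor to its entry in input_details
# by name rather than by order.
x = np.zeros((1, 4), dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
```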