YonghaoHe / LFD-A-Light-and-Fast-Detector

LFD is a major update of LFFD. Generally, LFD is a multi-class object detector characterized by light weight, low inference latency, and superior precision. It is intended for real-world applications.

int8 calibrator #5

Closed dexception closed 3 years ago

dexception commented 3 years ago

I have made the following changes in the file lfd/deployment/tensorrt/build_engine.py

assert int8_calibrator is not None, 'calibrator is not provided!'

if precision_mode == 'int8':
    config.int8_calibrator = INT8Calibrator(data='', cache_file='int8_calibrator_cache')  # int8_calibrator

I don't understand the data parameter. Can you please help with passing images in this data attribute?

ashuezy commented 3 years ago

+1

YonghaoHe commented 3 years ago

@dexception data is one big batch with shape (N, C, H, W), stored as a numpy array of type float32. For example, if we plan to use 100 images for calibration, you can follow the steps below:

  1. Align all images so that they have the same height/width/channels; cropping and padding are both OK.
  2. Apply normalization to each image. The normalization should be the same as the one used in training.
  3. Put all processed images into one numpy array of shape (100, C, H, W) as data. I will add this example to the repo soon.
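The three steps above can be sketched with numpy. The target size, the mean/std normalization values, and the crop-then-pad strategy below are placeholder assumptions for illustration; use the same image size and normalization as in your training pipeline.

```python
import numpy as np

def build_calibration_batch(images, target_h=480, target_w=640,
                            mean=127.5, std=127.5):
    """Stack preprocessed images into one (N, C, H, W) float32 batch.

    `images` is a list of HxWx3 uint8 arrays of varying sizes. The target
    size, mean, and std are placeholders -- match your training settings.
    """
    batch = np.zeros((len(images), 3, target_h, target_w), dtype=np.float32)
    for i, img in enumerate(images):
        # step 1: crop to the target size, then zero-pad smaller images
        img = img[:target_h, :target_w]
        padded = np.zeros((target_h, target_w, 3), dtype=np.float32)
        padded[:img.shape[0], :img.shape[1]] = img
        # step 2: normalize exactly as in training (placeholder values here)
        padded = (padded - mean) / std
        # step 3: HWC -> CHW, write into the big batch
        batch[i] = padded.transpose(2, 0, 1)
    return batch

# Example with 100 random "images" of varying sizes standing in for real data.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256,
                       size=(rng.integers(300, 700), rng.integers(300, 700), 3),
                       dtype=np.uint8)
          for _ in range(100)]
data = build_calibration_batch(images)
print(data.shape, data.dtype)  # (100, 3, 480, 640) float32
```

The resulting `data` array is what would be passed as the `data` argument of `INT8Calibrator` in place of the empty string shown earlier in this thread.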
dexception commented 3 years ago

@YonghaoHe I will wait for your example.

Would love to see the stats on the drop in accuracy and the improvement in speed for int8 mode vs fp16.

YonghaoHe commented 3 years ago

@dexception @ashuezy I have implemented INT8 inference; you can check timing_inference_latency.py and predict_tensorrt.py. Also, I have updated the INT8 inference latency in README.md.

dexception commented 3 years ago

@YonghaoHe Very good accuracy with the int8 implementation. I ran timing_inference on a 2080 Ti for the XS model with 6 GB memory, and the results are below.

fp32: 329 FPS
fp16: 449 FPS
int8: 480 FPS

YonghaoHe commented 3 years ago

@dexception 😄