This sample uses 20 image files contained here. So, 20 = 10 x 2 = CAL_BATCH_SIZE (10) x CAL_NB_BATCHES (2)
So now I know that CAL_BATCH_SIZE x CAL_NB_BATCHES = number_of_ppm_images, but why CAL_NB_BATCHES = 2 and CAL_BATCH_SIZE = 10?
No particular reason, I just set those numbers. They could just as well be CAL_NB_BATCHES = 4 and CAL_BATCH_SIZE = 5.
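For reference, here is a minimal sketch of how the two constants combine. The constant names come from this thread, but the helper function and its structure are assumptions for illustration, not code copied from inference_helper_tensorrt.cpp. The point is that the calibrator ends up reading CAL_BATCH_SIZE x CAL_NB_BATCHES images in total, so splitting 20 files as 10 x 2 or 5 x 4 feeds the same data to calibration.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical constants mirroring the ones discussed above; the real values
// live in inference_helper_tensorrt.cpp and can be changed as long as
// CAL_BATCH_SIZE * CAL_NB_BATCHES does not exceed the number of images available.
static constexpr int32_t CAL_BATCH_SIZE = 10;  // images consumed per calibration batch
static constexpr int32_t CAL_NB_BATCHES = 2;   // how many batches the calibrator will request

// Illustrative only: split an image list into the batches the calibrator will see.
// Assumes image_files.size() >= CAL_BATCH_SIZE * CAL_NB_BATCHES.
std::vector<std::vector<std::string>> MakeCalibrationBatches(const std::vector<std::string>& image_files)
{
    std::vector<std::vector<std::string>> batches;
    for (int32_t b = 0; b < CAL_NB_BATCHES; b++) {
        std::vector<std::string> batch(
            image_files.begin() + b * CAL_BATCH_SIZE,
            image_files.begin() + (b + 1) * CAL_BATCH_SIZE);
        batches.push_back(batch);  // 2 batches x 10 images = 20 files in total
    }
    return batches;
}
```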
Hmm, I'm a little confused, so I guess I have to brute-force a bit until I find good parameters? Or is there at least one approach I can use? Right now I don't know what to set my parameters to or where to start. And what about CAL_SCALE? Is it just calculated when I run calibration, or is it important to set a value before that?
The NVIDIA Developer Forums would be an appropriate place to ask these questions. Also, I suggest reading the official sample program (sampleINT8.cpp), because I referred to that sample code.
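For anyone following along, below is a rough skeleton of the calibrator interface that sampleINT8.cpp implements (the class and member names here are made up for illustration, not taken from that sample or from this repository). It shows where a batch size and a batch count plug in, and that the quantization scales are produced by TensorRT during calibration and merely cached by the application, rather than being set by hand beforehand.

```cpp
#include <NvInfer.h>
#include <cstdint>
#include <vector>

// Hypothetical INT8 calibrator skeleton in the spirit of sampleINT8.cpp.
class MyEntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    MyEntropyCalibrator(int32_t batch_size, int32_t nb_batches)
        : batch_size_(batch_size), nb_batches_(nb_batches) {}

    // Reported to TensorRT; corresponds to CAL_BATCH_SIZE in this discussion.
    int32_t getBatchSize() const noexcept override { return batch_size_; }

    bool getBatch(void* bindings[], const char* names[], int32_t nb_bindings) noexcept override
    {
        if (current_batch_ >= nb_batches_) {
            return false;  // calibration stops after CAL_NB_BATCHES batches
        }
        // ... load batch_size_ preprocessed images, copy them to a device
        //     buffer, and point bindings[0] at that buffer ...
        current_batch_++;
        return true;
    }

    // TensorRT computes the per-tensor scale factors itself during calibration
    // and hands the resulting calibration table to writeCalibrationCache(), so
    // the application only stores it for later engine builds.
    const void* readCalibrationCache(std::size_t& length) noexcept override
    {
        length = cache_.size();
        return cache_.empty() ? nullptr : cache_.data();
    }
    void writeCalibrationCache(const void* cache, std::size_t length) noexcept override
    {
        cache_.assign(static_cast<const char*>(cache),
                      static_cast<const char*>(cache) + length);
    }

private:
    int32_t batch_size_;
    int32_t nb_batches_;
    int32_t current_batch_ = 0;
    std::vector<char> cache_;
};
```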
Environment (Hardware)
(NVIDIA) CUDA 11.3.1, cuDNN 8.4.1, TensorRT 8.4.2.4
Question
So I followed the steps for setting up custom calibration, but I wonder if I have to feed in all of my training images. I have over 10k images and maybe it's not necessary; if it's better for accuracy later I'll do it, I just didn't find good documentation about optimal calibration settings.

I also wondered whether CAL_BATCH_SIZE is, like in training, just a value for making training/calibration faster. CAL_NB_BATCHES I don't know what it really is; I just got confused by the value 2 inside inference_helper_tensorrt.cpp.

From what I understood from this article https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf I thought CAL_SCALE would be calculated while calibrating? "CAL_SCALE and CAL_BIAS should be the same value as in training", but I'm not sure what exactly is meant by that. I trained my model in PyTorch (YOLOv7) and converted it to ONNX, but I never saw or modified either of those two parameters, which is why I'm asking here. Since it's a newbie-question thread, I apologize in advance.
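For completeness, the scale computation described in the linked GTC slides can be sketched roughly like this (illustrative only, not code from TensorRT or this repository): calibration picks a saturation threshold per tensor from the activation histogram (by minimizing KL divergence) and derives the symmetric INT8 scale from it, which is why those per-tensor scales are not something you set manually before running calibration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Symmetric per-tensor quantization as described in the GTC talk linked above.
// The threshold is chosen by the calibrator; nothing here is set by hand.
int8_t QuantizeSymmetric(float x, float threshold)
{
    const float scale = 127.0f / threshold;   // derived during calibration
    const float q = std::round(x * scale);    // map FP32 value onto the INT8 grid
    return static_cast<int8_t>(std::max(-127.0f, std::min(127.0f, q)));  // saturate
}
```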