Closed — psyhtest closed this issue 7 years ago
The new `tensorrt-time` program is now complete. It supports different batch sizes, floating-point precisions (fp16, fp32), and caching of created models (keyed by the batch size, precision, and TensorRT library version). I will work on scripting the benchmarking process next.
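A minimal sketch of what such a benchmarking script could look like: sweeping batch sizes and precisions by invoking the `tensorrt-time` binary. The command-line flags (`--model`, `--batch`, `--precision`), the binary path, and the deploy file name are assumptions for illustration, not the program's actual interface.

```python
# Hypothetical benchmark sweep over batch sizes and precisions.
# The flags and paths below are assumptions, not the real tensorrt-time CLI.
import itertools
import subprocess

BINARY = "./tensorrt-time"         # assumed location of the new program
MODEL = "deploy.prototxt"          # assumed GoogleNet deploy file
BATCH_SIZES = [1, 2, 4, 8, 16, 32]
PRECISIONS = ["fp32", "fp16"]

for batch, precision in itertools.product(BATCH_SIZES, PRECISIONS):
    cmd = [BINARY,
           "--model", MODEL,
           "--batch", str(batch),
           "--precision", precision]
    print("Running:", " ".join(cmd))
    # Each run can reuse a cached engine keyed by (batch size, precision,
    # TensorRT library version), as described above.
    subprocess.run(cmd, check=True)
```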
Create a new program for benchmarking inference based on `/usr/src/gie_samples/samples/sampleGoogleNet` (the GIE path on L4T), so we can do our usual batch size and model exploration. In the source code they say:
(meaning that no weights file is needed?)
So this should be like `caffe time` (whereas issue #1 is like `caffe test`). That is, it should measure execution time per batch rather than accuracy: `caffe time` benchmarks forward/backward passes without needing labelled data, whereas `caffe test` runs a model over a dataset and reports its score.
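A rough sketch of the kind of `caffe time`-style measurement loop the new program would need: repeated, timed forward passes only, with no output checking. The `infer` callable is a placeholder assumption for whatever executes one TensorRT batch.

```python
# Sketch of a timing-only benchmark loop (no accuracy evaluation).
# infer(batch) is a hypothetical callable that runs one forward pass.
import time

def benchmark(infer, batch, warmup=10, iterations=100):
    for _ in range(warmup):          # warm-up runs, not timed
        infer(batch)
    start = time.perf_counter()
    for _ in range(iterations):      # timed forward passes
        infer(batch)
    elapsed = time.perf_counter() - start
    return elapsed / iterations      # average seconds per batch
```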