This change deploys an ONNX model to the device and uses TensorRT to convert it to an engine file on the target device for the best optimization, so there is no need to prepare an NVIDIA toolkit to build the engine file in advance.
Once the engine file is created, it is cached and reused on subsequent runs.
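The build-once-then-cache pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual app code: `get_or_build_engine` and the injected `build_engine` callable (which stands in for the real TensorRT ONNX-to-engine conversion) are names invented here for clarity.

```python
import os

def get_or_build_engine(onnx_path, cache_dir, build_engine):
    """Return the path to a cached TensorRT engine, building it on first use.

    `build_engine(onnx_path, engine_path)` is a placeholder for the actual
    on-device TensorRT conversion step; it runs only when no cached engine
    exists, so later app starts skip the expensive build.
    """
    os.makedirs(cache_dir, exist_ok=True)
    engine_name = os.path.splitext(os.path.basename(onnx_path))[0] + ".engine"
    engine_path = os.path.join(cache_dir, engine_name)
    if not os.path.exists(engine_path):
        # First run on this device: convert ONNX -> TensorRT engine and cache it.
        build_engine(onnx_path, engine_path)
    return engine_path
```

On the first call the engine is built on the target device; every later call returns the cached file immediately, which is why no pre-built engine needs to ship with the app.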
For more details, please follow the README and the Jupyter notebook.
CloudWatch metrics are integrated in the app.
The `--local-image` option is used when calling the `panorama-cli build` command in the Jupyter notebook.
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.