Closed: Bill-Haoyu-Lin closed this issue 1 year ago
The TX2 can currently only run the PyTorch model directly; we need to figure out how to convert the PyTorch model into an ONNX or TensorRT .engine file for faster inference.
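For reference, a minimal sketch of the PyTorch-to-ONNX export step. It uses a torchvision network as a stand-in for the actual detector, and the input size, opset, and file names are assumptions, not the project's real values:

```python
# Sketch: export a PyTorch model to ONNX (placeholder model and input shape).
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # stand-in for the real detector
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW dummy input matching the model

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=11,
    input_names=["images"],
    output_names=["output"],
)

# On the TX2 itself, the ONNX file can then be built into a TensorRT .engine,
# e.g. with the trtexec tool that ships with TensorRT/JetPack:
#   trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```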
The ONNX model settled at around 20 FPS.
Reached 40 FPS on the TX2 with DeepStream.
Deployed the ONNX model on the TX2 with a live camera stream for real-time detection.
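A rough sketch of what the live-stream deployment could look like with onnxruntime and OpenCV. The camera source, input size, preprocessing, and output decoding are placeholders; on the TX2 the camera is often opened through a GStreamer pipeline string instead of a plain index:

```python
# Sketch: run an exported ONNX model on a live camera feed (placeholder
# preprocessing; detection decoding depends on the actual model's outputs).
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)  # default camera; a GStreamer pipeline may be needed on the TX2
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess: resize, BGR->RGB, scale to [0, 1], HWC->NCHW
    blob = cv2.resize(frame, (224, 224))
    blob = cv2.cvtColor(blob, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    blob = np.ascontiguousarray(np.transpose(blob, (2, 0, 1))[np.newaxis, ...])

    outputs = session.run(None, {input_name: blob})
    # ... decode detections from `outputs` and draw boxes on `frame` here ...

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```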
Deploy CV on the TX2 with real-time video input (camera).