DeepKnowledge1 opened 11 months ago
I have optimized the inference time, so it now runs at about 50 FPS. On top of that, the model can be exported to ONNX and OpenVINO.
It is also faster than the anomalib implementation.
I will share the code in a while.
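For anyone curious what the export path might look like before the code is shared, here is a minimal sketch of a PyTorch-to-ONNX export; the stand-in model, input resolution, and opset version below are assumptions for illustration, not the author's actual code:

```python
import torch
import torch.nn as nn

# Stand-in model; in practice the trained anomaly-detection model would be
# loaded from a checkpoint here (architecture and input shape are assumed).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model.eval()

dummy_input = torch.randn(1, 3, 256, 256)  # assumed input resolution

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,  # assumed; pick one your runtime supports
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# OpenVINO can then consume the ONNX file, e.g. via the `ovc model.onnx`
# CLI tool or openvino.convert_model("model.onnx") in Python.
```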
Can you share a TensorRT C++ inference version?