Closed · zhishanz closed this issue 1 year ago
There are a few things to keep in mind about the inference timing:
Inference with TensorRT can be much faster.
(Breakdown of inference time: backbone inference time + classifier inference time + post-processing time. The backbone accounts for the majority of the total, at ~10 ms.)
It is unfair to compare our TensorRT inference time with PatchCore's, since PatchCore does not use TensorRT.
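A common reason for measured latency far above the reported figure is timing methodology rather than hardware: the first iterations of a TensorRT/CUDA pipeline include engine setup and kernel compilation and can be orders of magnitude slower than steady state. Below is a minimal sketch of a fair benchmark that discards warm-up runs and averages over many iterations; the workload here is a hypothetical stand-in for the real backbone forward pass, not code from this repository.

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Average the latency of `fn`, discarding warm-up iterations.

    Warm-up matters because the first runs of a GPU pipeline
    (engine build, kernel compilation, memory allocation) are
    much slower than steady state and would inflate the average.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Hypothetical stand-in for one forward pass on a 288x288 image.
avg_s = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"average latency: {avg_s * 1000:.3f} ms")
```

Note that when timing real CUDA inference (e.g. with PyTorch), you also need to call `torch.cuda.synchronize()` before reading the clock, since kernel launches are asynchronous and return before the GPU has finished.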
Why can't the frame rate reach 77 fps in my local test? Using a 3090 GPU, the inference speed is 0.3-0.5 s per image (288*288).