xuemanshanzhong opened this issue 2 years ago
We report latency on a mobile device. To test inference speed on a mobile device, please refer to tnn_runtime. If you want to measure latency on a GPU, you can refer to benchmark.py. Feel free to give us feedback if you have any questions or results.
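For GPU timing, a common pitfall is that GPU kernels launch asynchronously, so naive wall-clock timing can under-report latency. Below is a minimal sketch of how such a measurement is typically done in PyTorch (warmup iterations, explicit synchronization, averaging over many runs). This is an illustration of the general technique, not the actual contents of benchmark.py; the function name `measure_latency` and its parameters are hypothetical.

```python
import time
import torch

def measure_latency(model, input_shape=(1, 3, 224, 224), warmup=10, runs=50):
    """Return the average forward-pass latency in milliseconds.

    Hypothetical helper: warms up first (cudnn autotuning, caches),
    then synchronizes around the timed loop so async GPU work is
    actually finished before the clock is read.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):      # untimed warmup iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # drain pending GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()  # ensure all timed work has completed
    return (time.perf_counter() - start) / runs * 1000.0
```

Usage: `measure_latency(my_model)` on a CUDA machine gives a per-forward-pass average in ms; on-device mobile numbers (e.g. via TNN) will differ from this and from the paper if the runtime, thread count, or input resolution differ.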
Hi, I followed tnn_runtime and measured the latency on my Mi 10 phone with a Snapdragon 865, but the measured latency is 3 to 4 times longer than the results in the paper. I'm not sure whether something is wrong with my method.
Hi, could you tell us how you calculated the model's latency? Did you use benchmark.py or other code? Thanks.