hirokic5 closed this issue 2 years ago
@hirokic5 Almost the same as yours. I suspect the high-performance numbers reported in the official paper only count the network inference time, not the full pipeline.
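One way to check this guess is to time each pipeline stage separately instead of the whole `pred_lines` call. Below is a minimal sketch; the stage functions are placeholders, not the real M-LSD code, and on a GPU you would also need to call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously.

```python
import time

def timed_stages(stages, x):
    """Run pipeline stages in order; return per-stage wall time in ms."""
    timings = {}
    for name, fn in stages:
        start = time.perf_counter()
        x = fn(x)
        timings[name] = (time.perf_counter() - start) * 1e3
    return timings, x

# Placeholder stages standing in for the real pipeline:
# preprocessing (resize/normalize), the network forward pass,
# and postprocessing (decoding line segments from the output maps).
stages = [
    ("preprocess",  lambda img: img),
    ("network",     lambda inp: inp),
    ("postprocess", lambda out: out),
]

timings, _ = timed_stages(stages, object())
print(timings)
```

If the "network" entry dominates but the total is still far above the paper's per-frame budget, that would support the idea that only the forward pass was reported.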
Thanks for your reply! I understand now.
I'm seeing similar numbers (even slower, actually).
What does "the inference time" mean here? Is it the time to detect a single line, or the time for a single execution of `pred_lines`?
Thanks for your great repository!
I ran demo.py and measured the speed of the large model; it came out to roughly 12 FPS on an RTX 2080Ti. The code looks like this:
So, how fast does the large model run in your environment? (I'd also like to know your GPU environment.)
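The original timing snippet isn't shown in this thread, but a measurement like the one described (average FPS over repeated calls) can be sketched as follows. The `dummy_pred_lines` function is a stand-in, not the repository's actual `pred_lines` API; warm-up iterations are included so one-time setup costs don't skew the average.

```python
import time

def measure_fps(infer_fn, image, n_warmup=5, n_runs=50):
    """Average FPS of infer_fn over n_runs calls, after a warm-up."""
    for _ in range(n_warmup):
        infer_fn(image)            # warm-up: exclude setup/compile cost
    start = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(image)            # one full call counts as one "frame"
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Hypothetical stand-in; in the real repo this would be something like
# a call to pred_lines(image, model, ...) on an actual input image.
def dummy_pred_lines(image):
    time.sleep(0.001)              # simulate ~1 ms of work per frame
    return []

fps = measure_fps(dummy_pred_lines, image=None)
print(f"{fps:.1f} FPS")
```

Note that for a CUDA model, the loop should call `torch.cuda.synchronize()` before each clock read, otherwise the measured time reflects kernel launch rather than actual GPU execution.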