When I run test_enet.py, I see only about 0.5~1 FPS inference at 1024x768 resolution. I think there must be a better way to measure the exact inference time. Please give me some advice. Thank you.
This depends heavily on the hardware you are using and appears to be a duplicate of #19. Have you checked whether the bottleneck comes from the data-feeding part rather than from model inference itself?
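One way to check is to time the forward pass in isolation with a pre-generated dummy batch, so disk I/O and preprocessing are excluded. Below is a minimal sketch assuming a TensorFlow 1.x session-based setup like test_enet.py; the single conv layer is a stand-in for the real ENet graph, and the names `input_image` and `predictions` are placeholders for whatever the script actually defines:

```python
import time
import numpy as np
import tensorflow as tf

# Stand-in graph so the sketch runs end to end; replace with the
# restored ENet model in practice. NHWC layout assumed.
input_image = tf.placeholder(tf.float32, [1, 768, 1024, 3])
predictions = tf.layers.conv2d(input_image, 32, 3, padding='same')

# Pre-generate a dummy batch so the timing loop measures only the
# forward pass, not data loading or preprocessing.
dummy_batch = np.random.rand(1, 768, 1024, 3).astype(np.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Warm-up: the first few runs include graph optimization and GPU
    # memory allocation, so exclude them from the measurement.
    for _ in range(5):
        sess.run(predictions, feed_dict={input_image: dummy_batch})

    n_runs = 50
    start = time.perf_counter()
    for _ in range(n_runs):
        sess.run(predictions, feed_dict={input_image: dummy_batch})
    elapsed = time.perf_counter() - start

    print('Mean inference time: %.1f ms (%.1f FPS)'
          % (elapsed / n_runs * 1000.0, n_runs / elapsed))
```

If the FPS measured this way is much higher than what test_enet.py reports end to end, the bottleneck is in the data pipeline rather than the model.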