Closed · yacad closed this issue 2 years ago
Hi @yacad
The times and FPS printed by the demo measure the inference part only, not the full processing of a single frame. This was also made explicit here: https://github.com/AlexeyAB/darknet/issues/5354#issuecomment-621722304.
This repo aims to optimize network inference, and that is where we are faster; we never claimed to be faster in the other parts of the processing. Inference is the only part that is optimized with TensorRT.
Why can the rest be slower?
The demo we provide here is just a use case showing how to use our library, not the best way to obtain maximum end-to-end performance.
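The distinction above can be sketched with a small timing harness. This is a minimal illustration, not tkDNN's actual code: the stage names and `time.sleep` durations are made-up placeholders standing in for real pre-processing, TensorRT inference, and post-processing work. It shows how an "inference-only" FPS can look much higher than the FPS of the whole pipeline.

```python
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

# Hypothetical per-frame stages; sleeps stand in for real work.
def preprocess(frame):   time.sleep(0.004); return frame   # e.g. resize, normalize
def infer(tensor):       time.sleep(0.002); return tensor  # the only TensorRT-accelerated part
def postprocess(output): time.sleep(0.004); return output  # e.g. NMS, drawing boxes

pre = inf = post = 0.0
for frame in range(100):
    x, dt = timed(preprocess, frame); pre += dt
    y, dt = timed(infer, x);          inf += dt
    _, dt = timed(postprocess, y);    post += dt

total = pre + inf + post
print(f"inference-only FPS : {100 / inf:6.1f}")   # what the demo prints
print(f"end-to-end FPS     : {100 / total:6.1f}") # what the wall clock sees
```

With these made-up numbers the inference-only FPS is several times the end-to-end FPS, even though inference itself is fast; slow pre/post-processing dominates the total.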
Thank you for your answer. As you suggested, I will test again after optimizing the pre- and post-processing.
Closing for inactivity. Feel free to reopen.
Hi.
I ran a test using a roughly 50-second video to compare the speed of darknet and tkDNN, measuring the time taken from the start to the end of the video.
My test environment is as follows:
Result:
Why do these results come out this way? Why does tkDNN take longer end-to-end?
Each program's own print statements clearly showed tkDNN processing frames much faster.
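A fair side-by-side comparison measures both programs with the same wall clock. A minimal sketch of such a harness is below; the placeholder command should be swapped for the real invocations (e.g. darknet's `./darknet detector demo ...` and tkDNN's demo binary), which are assumptions here and depend on your build.

```python
import subprocess
import sys
import time

def wall_time(cmd):
    """Run a command to completion and return its wall-clock time in seconds."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - t0

# Placeholder command; replace with the actual darknet / tkDNN demo invocations.
elapsed = wall_time([sys.executable, "-c", "pass"])
print(f"elapsed: {elapsed:.2f}s")
```

Comparing these wall-clock totals (rather than each demo's printed per-inference FPS) reflects what the user actually experiences for the whole video.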