Open lgthappy opened 3 days ago
4.6, or 4.22-lite if you are on the alpha 3 build. TensorRT is the fastest inference method, but it has to build an engine for every unique resolution you render at. Whether you get real-time results depends mostly on your GPU.
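Whether a given GPU is "real time" comes down to measured per-frame latency against the output frame budget (e.g. ~33 ms per frame for 30 fps). A minimal, framework-agnostic sketch for checking this; `run_inference` is a placeholder stub standing in for the actual TensorRT engine call, not part of this project:

```python
import time

def run_inference(frame):
    # Placeholder: swap in the real TensorRT engine execution here.
    time.sleep(0.002)  # simulate ~2 ms of GPU work
    return frame

def benchmark(frames, warmup=3):
    """Return the average per-frame latency in milliseconds."""
    for f in frames[:warmup]:
        run_inference(f)  # warm-up runs so caches/engine setup settle
    start = time.perf_counter()
    for f in frames:
        run_inference(f)
    elapsed = time.perf_counter() - start
    return elapsed / len(frames) * 1000.0

frames = [None] * 20
avg_ms = benchmark(frames)
budget_ms = 1000.0 / 30  # 30 fps target -> ~33.3 ms per frame
print(f"avg {avg_ms:.1f} ms/frame, real-time at 30 fps: {avg_ms < budget_ms}")
```

Measuring on your own hardware this way answers the "how many milliseconds per frame" question directly, since it varies by GPU, resolution, and engine precision.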
Hello, I have some questions: which version of the model is recommended if I want to do 2x video frame interpolation as fast as possible? Can real-time results be achieved by converting to TensorRT, and how many milliseconds does it take per frame? Do you have any related code?