Open txy00001 opened 2 months ago
Hi, you can use Motion_Deblurring/test_speed.py to measure inference time. Converting the model to ONNX/TRT for inference is a good suggestion; I have not tried it yet, but I have put it on the plan for my next work.
Thank you for open-sourcing such an excellent deblurring framework.
I would like to ask whether there is a metric for the model's inference time, i.e., how fast inference runs on a generic video or image. Also, is there a script for running inference on general, everyday scenes?
Finally, can the model be converted to ONNX/TRT inference for deployment?
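For reference, the kind of per-image timing that test_speed.py performs can be sketched with the standard library alone. This is a hypothetical helper (`measure_latency` is not part of the repository): it runs a few warm-up calls so one-time costs (CUDA context creation, caches) are excluded, then averages the remaining iterations.

```python
import time

def measure_latency(run_inference, warmup=3, iters=10):
    """Average per-call latency in seconds of a zero-argument inference callable.

    Hypothetical sketch, not the repository's test_speed.py: warm-up calls
    run first so one-time setup costs do not skew the measurement.
    """
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    return (time.perf_counter() - start) / iters
```

Note that for a GPU model the callable should synchronize the device before returning (e.g. call `torch.cuda.synchronize()` after the forward pass); otherwise kernel launches return asynchronously and the measured latency is too optimistic.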