jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

Parameter count for the optimized network #542

Closed: dl-infinity closed this issue 2 years ago

dl-infinity commented 2 years ago

Thank you for your excellent work! After optimizing a network with TensorRT, how can we obtain the number of parameters in the optimized network?

jkjung-avt commented 2 years ago

I don't know whether there's an API call in the TensorRT library for checking that. Maybe you could raise the question on the NVIDIA Developer Forum instead.

dl-infinity commented 2 years ago

@jkjung-avt thank you! I have raised the question on the NVIDIA Developer Forum. From your experience, do you think it is possible? I have not found any existing questions related to it.

jkjung-avt commented 2 years ago

> From your experience, do you think it is possible?

Certainly. For example, when you call the TensorRT library to load an engine for inference, it needs to deserialize the model and load all of its parameters into GPU memory. So the information is definitely there; I think it is just a matter of whether NVIDIA has made such an API public.
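
For what it's worth, TensorRT 8.2 and later expose an engine inspector that reports per-layer details of a deserialized engine. I have not verified whether it includes a direct parameter count, but it might be a starting point. A rough, untested sketch (the engine file name is just a placeholder):

```python
# Untested sketch: dump per-layer information from a deserialized engine.
# Requires TensorRT >= 8.2; 'yolov4-416.trt' is a placeholder path.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open('yolov4-416.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

inspector = engine.create_engine_inspector()
# JSON output lists every layer; how much detail appears depends on the
# ProfilingVerbosity the engine was built with.
print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))
```

Note that the inspector describes the optimized engine (after layer fusion), so its layer list will not match the original network one-to-one.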

dl-infinity commented 2 years ago

Thank you! Would it be possible for you to look into such an API? I looked through the TensorRT API reference but could not tell whether such a function is available. I also checked all the methods of the trt_yolo object (the one we call detect on), but none of them seems to be able to count parameters.

If such an API is not available, could you suggest some ways to manually count the number of parameters in Python?

jkjung-avt commented 2 years ago

If you don't find anything in the TensorRT API, just try to get an answer from the NVIDIA Developer Forum. I don't have a better answer...
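
That said, one rough workaround is to count the parameters in the source models that the engine was converted from, i.e. the intermediate ONNX file or the original Darknet .weights file. The engine is built from those same weights, and TensorRT's layer fusion does not add trainable parameters, so the source-model count is a reasonable estimate. A minimal, untested sketch (file names are placeholders from the yolov4-416 example):

```python
# Untested sketch: count parameters in the models the engine was built from.
import os
import struct

import numpy as np
import onnx


def count_onnx_params(onnx_path):
    """Sum the element counts of all weight tensors (initializers)."""
    model = onnx.load(onnx_path)
    return sum(int(np.prod(init.dims)) for init in model.graph.initializer)


def count_darknet_params(weights_path):
    """Everything after the small header of a Darknet .weights file is a
    flat float32 array (conv weights, biases, batch-norm statistics)."""
    with open(weights_path, 'rb') as f:
        major, minor, _revision = struct.unpack('<3i', f.read(12))
    # Newer Darknet versions store the 'seen' counter as int64.
    header_bytes = 12 + (8 if major * 10 + minor >= 2 else 4)
    return (os.path.getsize(weights_path) - header_bytes) // 4


if __name__ == '__main__':
    print('ONNX params   :', count_onnx_params('yolov4-416.onnx'))
    print('Darknet params:', count_darknet_params('yolov4.weights'))
```

The Darknet count includes batch-norm running statistics, so it will come out slightly higher than a strict trainable-parameter count.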