akirasosa / mobile-semantic-segmentation

Real-Time Semantic Segmentation in Mobile device
MIT License

Is the inference time of a model related to its number of parameters? #32

Closed InstantWindy closed 6 years ago

InstantWindy commented 6 years ago

How do you calculate the GFLOPS of a model? Does a model with fewer parameters have a shorter inference time?

akirasosa commented 6 years ago

I have rewritten the code using PyTorch. It looks like there are some tools to calculate GFLOPS in PyTorch.
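For example, a minimal sketch (not part of this repo) using the third-party `thop` package to count MACs and parameters; the MobileNetV2 model from torchvision and the 224x224 input size are assumptions for illustration:

```python
# Minimal sketch, assuming `pip install thop` and a torchvision MobileNetV2.
import torch
from torchvision.models import mobilenet_v2
from thop import profile

model = mobilenet_v2()
dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

# profile() returns multiply-accumulate ops (MACs) and the parameter count.
macs, params = profile(model, inputs=(dummy_input,))

# One MAC is roughly two FLOPs, so GFLOPs is about 2 * MACs / 1e9.
print(f"GMACs:  {macs / 1e9:.2f}")
print(f"GFLOPs: {2 * macs / 1e9:.2f}")
print(f"Params: {params / 1e6:.2f} M")
```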

> Does a model with fewer parameters have a shorter inference time?

Generally speaking, yes. Please take a look at my article for more detail: Real-Time deep learning in mobile application
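Since parameter count is only a rough proxy for latency, the reliable check is to time the forward pass on the target device. A minimal sketch, again assuming a torchvision MobileNetV2 on CPU:

```python
# Minimal sketch: wall-clock timing of a forward pass, assuming MobileNetV2.
import time
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2().eval()
dummy_input = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # Warm-up runs so one-time initialization does not skew the measurement.
    for _ in range(5):
        model(dummy_input)

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy_input)
    elapsed = time.perf_counter() - start

print(f"Average inference time: {elapsed / runs * 1000:.1f} ms")
```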