AojunZhou / Incremental-Network-Quantization

Caffe implementation of Incremental Network Quantization

Can this method accelerate inference speed? #8

Open mynameischaos opened 6 years ago

AojunZhou commented 6 years ago

@mynameischaos Yes, network quantization can accelerate inference speed, but your hardware must support low-precision bit-shift operations. You may want to look at Intel-Altera and Intel Movidius products.
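To illustrate the point above: since INQ constrains non-zero weights to (signed) powers of two, a multiply-accumulate can in principle be replaced by a shift-accumulate. The sketch below is not from this repo; `QuantWeight` and `shift_dot` are hypothetical names, and it assumes activations are already in a fixed-point representation so that negative weight exponents fold into non-negative integer shifts.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical encoding: each quantized weight stores a sign and an exponent n
// such that weight = sign * 2^n. We assume the fixed-point scaling of the
// activations absorbs negative exponents, so `shift` is non-negative here.
struct QuantWeight {
  int8_t sign;   // +1, -1, or 0 for pruned weights
  int8_t shift;  // exponent n (assumed non-negative after scaling)
};

// Dot product where every multiplication is replaced by a bit shift.
int64_t shift_dot(const std::vector<int32_t>& activations,
                  const std::vector<QuantWeight>& weights) {
  int64_t acc = 0;
  for (size_t i = 0; i < activations.size(); ++i) {
    if (weights[i].sign == 0) continue;  // pruned weight contributes nothing
    int64_t term = static_cast<int64_t>(activations[i]) << weights[i].shift;
    acc += (weights[i].sign > 0) ? term : -term;  // signed accumulate
  }
  return acc;
}
```

Whether this actually runs faster than a plain multiply-accumulate depends on the hardware; on general-purpose CPUs the gain is usually small, which is why the comment above points to FPGA/VPU-style accelerators.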

ouceduxzk commented 6 years ago

I am curious what the rough inference speedup is on an Intel CPU that supports low-precision bit shifts. Any numbers?

victorygogogo commented 6 years ago

How can I check whether a CPU supports "low-precision bit shift"? @Zhouaojun