Swall0w / papers

This is a repository for summarizing papers especially related to machine learning.

Deep Learning with Limited Numerical Precision #694

Open Swall0w opened 5 years ago

Swall0w commented 5 years ago

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan

Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.

https://arxiv.org/abs/1502.02551
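The key mechanism in the paper is stochastic rounding: when a value falls between two points on the fixed-point grid, it rounds up with probability proportional to its distance past the lower point, so the rounding is unbiased in expectation. A minimal NumPy sketch of that scheme (the function name and `frac_bits` parameter are my own, not from the paper):

```python
import numpy as np

def stochastic_round(x, frac_bits=8, rng=None):
    """Stochastically round x onto a fixed-point grid with
    `frac_bits` fractional bits (grid spacing eps = 2**-frac_bits).
    Each value rounds up with probability equal to its fractional
    distance past the lower grid point, so E[round(x)] == x."""
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -frac_bits
    scaled = np.asarray(x, dtype=np.float64) / eps
    floor = np.floor(scaled)
    frac = scaled - floor                        # in [0, 1)
    round_up = rng.random(size=floor.shape) < frac
    return (floor + round_up) * eps
```

For example, with `frac_bits=2` (grid spacing 0.25), the value 0.3 rounds to 0.25 about 80% of the time and to 0.5 about 20% of the time, so the average over many draws stays close to 0.3; nearest-rounding would instead always return 0.25 and accumulate bias.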

yegane-AI commented 4 years ago

Is there a way to find the code for this article? I tried to implement it in Python and TensorFlow, but when I tried to convert the data types to int32 and int16 I got a type error: `TypeError: Cannot convert 0.0 to EagerTensor of dtype int32`. Can anyone help me?
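That error usually means a float value (like `0.0`) is being fed into an op that expects an integer tensor; TensorFlow will not convert it implicitly. For simulating the paper's low-precision arithmetic you generally do not need integer dtypes at all: the common workaround is "fake quantization", snapping values onto the signed `<IL, FL>` fixed-point grid while keeping them stored as floats. A sketch in NumPy (the same ops exist in TF as `tf.floor`, `tf.clip_by_value`, `tf.random.uniform`; the function name and default bit widths are my own):

```python
import numpy as np

def to_fixed_point(x, int_bits=4, frac_bits=12, rng=None):
    """Simulate a signed <int_bits, frac_bits> fixed-point format
    (16 bits total with the defaults) without casting to an integer
    dtype -- the cast is what triggers the TypeError. Values are
    stochastically rounded to the grid and saturated to the
    representable range, but stay float64 throughout."""
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -frac_bits
    lo = -2.0 ** (int_bits - 1)          # most negative value
    hi = 2.0 ** (int_bits - 1) - eps     # most positive value
    scaled = np.asarray(x, dtype=np.float64) / eps
    floor = np.floor(scaled)
    up = rng.random(floor.shape) < (scaled - floor)  # stochastic rounding
    return np.clip((floor + up) * eps, lo, hi)
```

Applying this to weights, activations, and gradient updates in the forward/backward pass mimics the paper's 16-bit training setup while the framework still computes in floating point.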