Open mamba36 opened 4 years ago
For backprop to work, training needs to be done in float precision
Thank you very much for your answer, but my problem is in the feedforward phase. The features and weights are both binary, so the convolution result should be an integer, but when I run your code I see that the intermediate results are floating point. The image shows the binary_net_conv2d_2 layer's features. They're supposed to be integers.
Yes, but the forward pass also needs to be done in float precision for backprop to work. For binary inference, you'll need to use another software package.
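To illustrate the point above: here is a minimal NumPy sketch (not from this repository; the `binarize` helper is illustrative) of why binary-network code keeps everything in float precision. The weights and activations are constrained to ±1, so a dot product between them is integer-valued, but it is stored in a float tensor so that an autodiff framework can route gradients through during training (e.g. with a straight-through estimator).

```python
import numpy as np

np.random.seed(0)

def binarize(x):
    # Deterministic binarization to {-1, +1}. The result is kept in float
    # dtype (not int) so that training frameworks can still backpropagate
    # through the layer, typically via a straight-through estimator.
    return np.where(x >= 0, 1.0, -1.0)

# Toy 1-D example: dot product between binarized weights and inputs.
weights = binarize(np.random.randn(8))
inputs = binarize(np.random.randn(8))

# The accumulation of +/-1 products is integer-valued, even though both
# operands and the result live in float arrays.
acc = float(np.dot(weights, inputs))
```

Note that later layers such as batch normalization or scaling will reintroduce fractional values, so intermediate feature maps in a full model need not stay integer-valued.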
Can you tell me which package I should use?
Unfortunately I haven't kept up to date with the latest developments in binary networks - it's been a few years since I made this. That said, I've heard that plumerai & mxnet have packages available.
Why is the value of the feature map after the BinaryNetConv2D layer not an integer? The result of the convolution between binarized weights and binarized inputs should be an integer, but when I run your code, I find that the feature maps are all decimals.