I have read `bnn::bconv_3x3` and the old version of `pack_128`.
BNN was invented to deploy deep learning models on edge devices, such as the Raspberry Pi, RK3308, and so on. The RK3308 has only about 32 MB of memory, so we have to think carefully about whether we need `packed_weight` at all. After all, an app usually ships more than one model.
On the other hand, the code reorders the input and then unpacks the result, and I was worried that the overall efficiency is no better than using the data directly in the normal order.