This commit contains a number of tweaks to ensure exporting quantization and testing inference both work on Linux-based setups.
exportquant.py: made the np.uint32 dtype explicit -- on some configurations NumPy promotes Python's int to np.int64, so the packed weights occupy 8 bytes instead of 4, and when the buffer is viewed as np.uint32 every other entry is just 0x00000000.
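A minimal reproduction of the failure mode (the values are illustrative, not from exportquant.py):

```python
import numpy as np

# With np.int64 (the default Python-int promotion on most Linux setups),
# each packed word occupies 8 bytes; viewing the buffer as np.uint32 then
# interleaves a zero word after every real entry on little-endian machines.
implicit = np.array([0x11223344, 0x55667788], dtype=np.int64)
explicit = np.array([0x11223344, 0x55667788], dtype=np.uint32)

print(implicit.view(np.uint32))   # 4 entries, every other one 0
print(explicit.view(np.uint32))   # 2 entries, as intended
```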
BitNetMCU_MNIST_dll.c: added a forward declaration for BitMnistInference and a guard around __declspec (not supported by gcc).
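The guard can look like the following sketch -- the macro name, the function signature, and the stub body are assumptions for illustration, not taken from the file:

```c
#include <stdint.h>
#include <stddef.h>

/* MSVC needs __declspec(dllexport) on exported symbols; gcc does not
 * recognize __declspec at all, so hide it behind a compiler check. */
#ifdef _MSC_VER
#define DLL_EXPORT __declspec(dllexport)
#else
#define DLL_EXPORT
#endif

/* Forward declaration so the symbol is known before first use
 * (the parameter list here is illustrative). */
DLL_EXPORT uint32_t BitMnistInference(const uint8_t *input);

/* Stub body so this sketch compiles standalone. */
DLL_EXPORT uint32_t BitMnistInference(const uint8_t *input) {
    (void)input;
    return 0u;
}
```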
requirements.txt: added tensorboard and matplotlib.
Makefile: added for compilation on Linux.