AlexeyAB / yolo2_light

Light version of convolutional neural network Yolo v3 & v2 for objects detection with a minimum of dependencies (INT8-inference, BIT1-XNOR-inference)
MIT License
301 stars · 116 forks

XNOR and Binary inference #48

Open jjeong19 opened 5 years ago

jjeong19 commented 5 years ago

@AlexeyAB Hi,

First of all thanks for your great work!

I was curious about the binarization of the weights and the BIT1-XNOR inference. It seems that the weights used are binarized, but they appear to be in a float format. Is this because the XNOR method is only simulated during inference to check its accuracy, or am I missing a detail about how these floats encode the binary values? I ask because when running XNOR inference the weights seem to take up quite a bit of memory.

Thanks, Jason

AlexeyAB commented 5 years ago

@jjeong19 Hi,

XNOR weights are stored in the .weights file in FP32 format. During loading, the XNOR weights are converted to BIN1, so both the weights and the input are used as BIN1 at inference time, for speedup.

Because the XNOR weights are stored in FP32 format, the weights file can also be used for transfer learning as pre-trained weights, or for continuing training.