Open ChenCong7375 opened 6 years ago
These layers (ConvX) use 1-bit operations (xnor, bit_count) instead of float-32-bit operations (multiply, add), so they are at least 5x faster. It looks like 1-bit weights and inputs are enough for the middle layers: https://github.com/AlexeyAB/darknet/issues/1472
XNOR-gemm is optimized with AVX2: https://github.com/AlexeyAB/darknet/blob/57e878b4f9512cf9995ff6b5cd6e0d7dc1da9eaf/src/gemm.c#L840-L897
Thank you! I am going to train a model with this cfg; the faster the better.
It is fantastic to see something new in darknet! You fixed yolov3-tiny_xnor.cfg, but I don't really understand the ConvX layers (xnor=1). Could you please explain why you chose those layers to be ConvX layers?