jonathanmarek1 / binarynet-tensorflow

https://jonathanmarek1.github.io/binarynet-tensorflow/
63 stars 18 forks

Code for binary convolution using xnor #5

Open yataoz opened 6 years ago

yataoz commented 6 years ago

I can only see a regular convolution layer (tf.nn.conv2d) applied to binarized inputs and weights. Where can I find the code for fast binary convolution using XNOR?

jonathanmarek1 commented 6 years ago

The code is in the C files (c_ops.h, and c_ops_neon.h for the fast ARM version). I am aware the code is hard to read.

YaoQ commented 6 years ago

@jonathanmarek1 Thanks for sharing the names2.h file. We are now trying to reproduce your results on a Raspberry Pi 3B (32-bit OS). When compiling the test_xnornet.c code on the RPi 3, we get the following error:

gcc -c -mfloat-abi=hard -mfpu=neon-vfpv4 -mcpu=cortex-a53  -I../../include -I -I/usr/local/include   src/xnornet.c -o src/xnornet.o
In file included from src/xnornet.c:3:0:
../../include/c_ops.h: In function ‘conv2d’:
../../include/c_ops.h:257:14: error: incompatible types when assigning to type ‘uint8x16_t’ from type ‘float32x4_t’
         v##x = tmp; \
              ^
../../include/c_ops_neon.h:13:25: note: in expansion of macro ‘f’
 #define for_each_reg(f) f(0) f(1) f(2) f(3) f(4) f(5) f(6) f(7) f(8)
                         ^
../../include/c_ops.h:261:9: note: in expansion of macro ‘for_each_reg’
         for_each_reg(f)
         ^
../../include/c_ops.h:257:14: error: incompatible types when assigning to type ‘uint8x16_t’ from type ‘float32x4_t’
         v##x = tmp; \
              ^
../../include/c_ops_neon.h:13:30: note: in expansion of macro ‘f’
 #define for_each_reg(f) f(0) f(1) f(2) f(3) f(4) f(5) f(6) f(7) f(8)
                              ^
../../include/c_ops.h:261:9: note: in expansion of macro ‘for_each_reg’
         for_each_reg(f)
         ^
../../include/c_ops.h:257:14: error: incompatible types when assigning to type ‘uint8x16_t’ from type ‘float32x4_t’
         v##x = tmp; \
              ^

It says the code assigns a float32x4_t value to a uint8x16_t variable. Do you have any idea where the issue is?

Thank you very much.

YaoQ commented 6 years ago

@jonathanmarek1 I manually converted the float32x4_t to the uint8x16_t type: v##x = vreinterpretq_u8_f32(tmp); The code now compiles, and I will test it.

Thanks.

YaoQ commented 6 years ago

@jonathanmarek1 Thanks for your great work. We are now using test_xnornet.c on the RPi 3 to test the binarynet that we generated. The test code compiles and produces a binary, but when we run it we get this error:

root@firefly:~/binary-C/bin# sudo ./binary 
Illegal instruction (core dumped)

Note:

  1. Any suggestions?
  2. We just use a 227x227x3 RGB image file in PNG format; is that OK?
  3. How do you run the test code on the RPi 3?

Thanks.
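An "Illegal instruction" crash on ARM commonly means the binary contains instructions the CPU cannot execute, for example when the -mfpu/-mcpu flags do not match the actual core. As a generic diagnostic (not something prescribed by this repo), the kernel's view of the CPU features can be checked:

```shell
# On 32-bit ARM Linux, NEON support shows up as "neon" in the "Features"
# line of /proc/cpuinfo (on x86 the equivalent line is "flags"):
grep -m1 -i -E 'features|flags' /proc/cpuinfo

# If "neon" is missing, rebuild with flags matching the actual core, e.g.:
#   gcc -mfloat-abi=hard -mfpu=neon-vfpv4 -mcpu=cortex-a53 ...
```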

YaoQ commented 6 years ago

@jonathanmarek1 OK, we finally managed to run the binary program without error, on both the RK3288 and the Hi3519. But for now we just use a fake image generated with the following code:

    {
        float m[] = {0.01735949, 0.01772787, 0.01774145};
        float b[] = {-2.13645733, -2.04468092, -1.81410977};
        float *xf = image.ptr;
        for (int i = 0; i < 227*227*3; i++)
            xf[i] = 128 * m[i % 3] + b[i % 3];
    }
  1. But we have no idea which image format you use: NV21, RGB, YUV, or something else?
  2. What is the purpose of the code shown above?

Thank you very much!

jonathanmarek1 commented 6 years ago

  1. RGB
  2. The code above applies the initial normalization to the input image (the constants are printed in the log as 'input parameters' when exporting a model). For test_xnornet, the 'image' file should contain raw float values for a 227*227*3 image (not a PNG).

YaoQ commented 6 years ago

@jonathanmarek1 Thanks for your reply :1st_place_medal: . It is really helpful. We saw that the XNOR-Net paper reports top-1 accuracy of: alexnet_BWN 56.8%, alexnet_XNOR 43.3%.

Have you benchmarked the binarynet on Android or the RPi 3? Is the accuracy the same as shown above?

jonathanmarek1 commented 6 years ago

The accuracy should be the same, as the calculations are the same. If it is different then it is a bug.

YaoQ commented 6 years ago

Got it. Thanks