Xilinx / BNN-PYNQ

Quantized Neural Networks (QNNs) on PYNQ
https://xilinx.github.io/finn/
BSD 3-Clause "New" or "Revised" License
668 stars 307 forks

Training of WnAn #60

Closed mohdumar644 closed 5 years ago

mohdumar644 commented 5 years ago

I noticed an update to the BNN-PYNQ library adding support for WnAn.

Can you please give a pointer as to what quantization scheme you are using, and with which framework?

From the finnthesizer, it does not look like you are using DoReFa-Net as you did in QNN-MO-PYNQ.

nickfraser commented 5 years ago

We're planning to release the training scripts by the end of this week - I hope this will answer all of your questions.

mohdumar644 commented 5 years ago

Can you link a research paper, if you implemented one?

nickfraser commented 5 years ago

Yes, it is this one. The small change is that we do not allow the largest negative value to be represented by the weights.

I.e., for 2-bit weights we allow the values [-1, 0, 1].
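
For illustration, a minimal NumPy sketch of a symmetric uniform quantizer with that restriction; the clipping range and scale handling here are assumptions, not necessarily the exact function used in the BNN-PYNQ training code:

```python
import numpy as np

def quantize_weights(w, bits=2):
    """Uniform symmetric weight quantizer that excludes the largest
    negative level, e.g. bits=2 -> levels {-1, 0, 1}.
    Hypothetical sketch: clipping range and scaling are assumptions,
    not the exact BNN-PYNQ training function."""
    n = 2 ** (bits - 1) - 1      # levels per side (1 for 2 bits)
    w = np.clip(w, -1.0, 1.0)    # assume weights live in [-1, 1]
    return np.round(w * n) / n   # snap to the symmetric grid

print(quantize_weights(np.array([-0.9, 0.2, 0.7]), bits=2))
# -> [-1.  0.  1.]
```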

Sorry for the delay; we will update the repo with this training code, but some other priorities have taken over in the short term.

mohdumar644 commented 5 years ago

Thanks, but your paper does not define the quantization function exactly, and I could not decipher it from the finnthesizer. Are there any more hints?

mohdumar644 commented 5 years ago

What do the s0.5 and s0.25 in the given .npz names imply?

nickfraser commented 5 years ago

There is actually no (relevant) meaning to those. They relate to the topological description of CNV & LFC; these values are simply hardcoded into the cnv/lfc.py files respectively.

Apologies again for the delay; we plan to upload the training scripts for these this week.

mohdumar644 commented 5 years ago

Thanks. I was able to train W1A2 and W1A4 networks in Theano, using {-0.5, 0.5} thresholds and equidistant thresholds respectively. I would still like to see your scripts, though.
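
For readers, a minimal sketch of this kind of threshold-based activation quantization; the threshold counts and integer level encoding are assumptions inferred from the comment, not the poster's actual Theano code:

```python
import numpy as np

def threshold_activation(x, thresholds):
    """Quantize pre-activations by counting how many thresholds
    each value crosses (a thermometer-style encoding).
    Hypothetical helper for illustration, not the poster's script."""
    return np.searchsorted(np.asarray(thresholds), x, side='left')

x = np.array([-0.7, 0.0, 0.9])
# 2-bit activations: thresholds {-0.5, 0.5} -> 3 levels {0, 1, 2}
print(threshold_activation(x, [-0.5, 0.5]))            # -> [0 1 2]
# 4-bit activations: 15 equidistant thresholds -> 16 levels
# (threshold count assumed; the range [-1, 1] is also an assumption)
print(threshold_activation(x, np.linspace(-1.0, 1.0, 15)))
```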