Xilinx / pytorch-ocr


Exported models not working with LSTM-PYNQ? #4

Open jmevert opened 3 years ago

jmevert commented 3 years ago

Hi,

I know this is an older project that's no longer maintained, but I thought asking was worth a shot!

So I've been using this repository with the intended dependencies, changing very little, and applied the network architecture to a different problem. This worked: I got a model that achieves reasonably good results, then re-trained it with quantized activations and quantized fully-connected weights. The export step (of the .hpp) worked as well, as did the export of a test image.

However, when I try to use my exported model together with LSTM-PYNQ, the simulation does not produce the intended output, i.e. the model in this repository and the one in LSTM-PYNQ come to different results. I also noticed that the parameters "NUMBER_OF_CLASSES" and "NUMBER_OF_CLASSES_TYPEWIDTH" do not get exported (see the sketch below for how I'm working around that for now).
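
As a stopgap I've been appending those two defines to the generated .hpp by hand, roughly as sketched here. The TYPEWIDTH formula (bits needed to hold the class index) is just my guess, so it may well not match what LSTM-PYNQ expects:

```python
import math

def append_class_defines(hpp_path, num_classes):
    """Append the two defines that the exporter skips.

    The TYPEWIDTH formula is my own guess (bits needed to hold the
    class index), not something taken from LSTM-PYNQ.
    """
    typewidth = max(1, math.ceil(math.log2(num_classes)))
    with open(hpp_path, "a") as f:
        f.write(f"\n#define NUMBER_OF_CLASSES {num_classes}\n")
        f.write(f"#define NUMBER_OF_CLASSES_TYPEWIDTH {typewidth}\n")
```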

Would appreciate any input, thanks!

volcacius commented 3 years ago

Hello,

Always good to ask. Are you using this branch of Brevitas (formerly known as pytorch-quantization)? https://github.com/Xilinx/brevitas/tree/lstm-pynq

How different are the results? If I remember correctly (it has been a few years), I wasn't doing any cell-state quantization at training time, and in hardware the cell state was implemented as 16-bit fixed point, so the two definitely might not be bit-accurate, but they should give similar results.

Regarding the export in general, the implementation I did was very specific (and fragile) to this particular network/target hardware, so I wouldn't be surprised if something was failing silently as soon as you change anything. I would debug it step by step to see if anything looks weird.

As a side note, I'd love to bring this repo back up to date, especially with more modern quantization techniques, but I have too much on my todo list at the moment. Maybe next year.
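
To make the "similar but not bit-accurate" point concrete: you could emulate the 16-bit fixed point in PyTorch and dump every submodule's output for a step-by-step comparison against the HLS simulation, something along these lines. The integer/fraction split and the helper names are for illustration only (I'm going from memory here, this is not what LSTM-PYNQ actually implements):

```python
import torch

def to_fixed_point(x, total_bits=16, frac_bits=12):
    """Round-trip a float tensor through signed fixed point.

    16 total bits matches what I remember the hardware using; the
    integer/fraction split is an assumption, so treat frac_bits as a knob.
    """
    scale = 2.0 ** frac_bits
    q_min = -(2 ** (total_bits - 1))
    q_max = 2 ** (total_bits - 1) - 1
    return torch.round(x * scale).clamp(q_min, q_max) / scale

def dump_intermediates(model, sample):
    """Run one forward pass and capture each named submodule's output.

    Note that nn.LSTM modules return a tuple (output, (h_n, c_n)).
    """
    captured, hooks = {}, []
    for name, module in model.named_modules():
        if name:  # skip the root module itself
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, name=name: captured.__setitem__(name, out)))
    with torch.no_grad():
        model(sample)
    for h in hooks:
        h.remove()
    return captured
```

Diffing the captured tensors (with and without to_fixed_point applied to them) against the simulation's values, layer by layer, should tell you where the two implementations first diverge.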

Alessandro