This repo contains the pip install package for Quantized LSTM on PYNQ. Currently one overlay is included, which performs Optical Character Recognition (OCR) on old German Fraktur text and on a plain-text dataset provided by Insiders Technologies GmbH.
If you find it useful, we would appreciate a citation to:
FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs, V. Rybalkin, A. Pappalardo, M. M. Ghaffar, G. Gambardella, N. Wehn, M. Blott. Accepted for publication, 28th International Conference on Field Programmable Logic and Applications (FPL), August, 2018, Dublin, Ireland.
BibTeX:
@ARTICLE{2018arXiv180704093R,
author = {{Rybalkin}, V. and {Pappalardo}, A. and {Mohsin Ghaffar}, M. and
{Gambardella}, G. and {Wehn}, N. and {Blott}, M.},
title = "{FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1807.04093},
primaryClass = "cs.CV",
keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Hardware Architecture, Computer Science - Machine Learning},
year = 2018,
month = jul
}
This repo is a joint release of University of Kaiserslautern, Microelectronic System Design Research Group: Vladimir Rybalkin, Mohsin Ghaffar, Norbert Wehn in cooperation with Xilinx, Inc.: Alessandro Pappalardo, Giulio Gambardella, Michael Gross, Michaela Blott.
In order to install it on your PYNQ board (running PYNQ v2.0), connect to the board, open a terminal and type:
sudo pip3.6 install git+https://github.com/xilinx/LSTM-PYNQ.git
This will install the LSTM-PYNQ package to your board, and create a lstm directory in the Jupyter home area. You will find the Jupyter notebooks to test the LSTM in this directory.
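The install command above can also be issued non-interactively, for instance when provisioning several boards over SSH. A minimal sketch that only reuses the pip command and URL given above (everything else, such as how the command is dispatched to the board, is left to your setup):

```python
# Sketch: build the install command from the instructions above so it can
# be run non-interactively, e.g. via subprocess.run() in an SSH session.
# The repository URL is the one given in the README; the helper itself
# is illustrative, not part of the LSTM-PYNQ package.

REPO_URL = "git+https://github.com/xilinx/LSTM-PYNQ.git"

def pip_install_command(repo_url: str = REPO_URL) -> list[str]:
    """Return the argv list for installing LSTM-PYNQ on PYNQ v2.0."""
    return ["sudo", "pip3.6", "install", repo_url]

# On the board one would run, for example:
#   subprocess.run(pip_install_command(), check=True)
print(" ".join(pip_install_command()))
```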
The notebooks are installed in the /home/xilinx/jupyter_notebooks/lstm/ folder.

In order to rebuild the hardware designs, the repo should be cloned on a machine with an installation of the Vivado Design Suite (tested with version 2017.4). Follow the step-by-step instructions below:
1. Move to the <clone_path>/LSTM_PYNQ/lstm/src/network/ folder (the sources are rooted at <clone_path>/LSTM_PYNQ/lstm/src/).
2. Launch the build script:

./make-hw.sh {dataset} {network} {platform} {mode}
where:

- dataset: the target dataset; the corresponding sources are located in <clone_path>/LSTM_PYNQ/lstm/src/network/<dataset>
- mode: h to launch Vivado HLS synthesis, b to launch the Vivado project (needs HLS synthesis results), or a to launch both

The results will be generated in the <clone_path>/LSTM_PYNQ/lstm/src/network/output/ folder.
To use a rebuilt design on the board, copy the generated bitstream files into the <pip_installation_path>/lstm/bitstreams/ folder.
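The rebuild-and-deploy flow above can be sketched as a small helper script. Only the make-hw.sh argument order, the h/b/a mode flags, and the output/ and bitstreams/ destinations come from the instructions above; the example dataset/network/platform values and the .bit/.tcl file patterns are placeholder assumptions:

```python
# Sketch of the rebuild-and-deploy flow, under stated assumptions:
# make-hw.sh takes {dataset} {network} {platform} {mode} and writes its
# results under src/network/output/; the board-side package expects
# bitstreams in <pip_installation_path>/lstm/bitstreams/.
import shutil
import subprocess
from pathlib import Path

VALID_MODES = {"h", "b", "a"}  # HLS synthesis only / Vivado only / both

def make_hw_command(dataset: str, network: str, platform: str, mode: str) -> list[str]:
    """Build the argv for make-hw.sh, validating the mode flag."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}, got {mode!r}")
    return ["./make-hw.sh", dataset, network, platform, mode]

def deploy_bitstreams(output_dir: Path, bitstreams_dir: Path) -> list[Path]:
    """Copy generated bitstream files (assumed *.bit/*.tcl) to bitstreams/."""
    bitstreams_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for pattern in ("*.bit", "*.tcl"):
        for f in output_dir.rglob(pattern):
            copied.append(Path(shutil.copy(f, bitstreams_dir / f.name)))
    return copied

# Example usage (placeholder arguments, run from lstm/src/network/):
#   subprocess.run(make_hw_command("fraktur", "lstm", "pynqZ1-Z2", "a"), check=True)
#   deploy_bitstreams(Path("output"), Path("/path/to/lstm/bitstreams"))
```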