
Perform deep learning inference on signals
GNU General Public License v3.0

GR-WAVELEARNER

Incorporate Deep Learning into GNU Radio

Author

This software is written by Deepwave Digital, Inc. ([www.deepwavedigital.com](https://www.deepwavedigital.com)).


Inquiries


Description

This out-of-tree (OOT) module for GNU Radio provides an interface for calling NVIDIA's TensorRT deep learning binaries from a GNU Radio flowgraph. TensorRT optimizes deep learning networks for inference operations on NVIDIA graphics processing units (GPUs).

For an example of how to use gr-wavelearner, see our presentation from the NVIDIA GPU Technology Conference linked in the References below.


Dependencies

gr-wavelearner requires the following software, all of which must be installed before building:

  • GNU Radio
  • NVIDIA CUDA
  • NVIDIA cuDNN
  • NVIDIA TensorRT (including its Python bindings)

See the Troubleshooting section below for the specific versions we have tested.

Requirements

Because TensorRT only runs on NVIDIA hardware, an NVIDIA GPU (e.g., a discrete GPU or a Jetson module) is required.

Current Blocks

  • Inference – performs deep learning inference on blocks of signal data using a TensorRT engine (.plan) file
  • Terminal Sink – prints inference results to the terminal
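As a rough illustration of how these blocks fit into a flowgraph, here is a minimal Python sketch. The constructor signatures for `wavelearner.inference` and `wavelearner.terminal_sink`, the model dimensions, and the `model.plan` file name are all assumptions for illustration only; check the blocks' documentation in GNU Radio Companion for the actual parameters.

```python
# Hypothetical flowgraph: signal source -> stream_to_vector -> inference
# -> terminal sink. Requires GNU Radio and gr-wavelearner installed; the
# wavelearner block signatures below are assumptions -- verify in GRC.

BATCH_SIZE = 1
INPUT_LENGTH = 2048   # complex samples per inference (model-dependent)
OUTPUT_LENGTH = 10    # outputs per inference (model-dependent)

def total_vlen(batch_size, per_inference_length):
    """Vector length handled per work call: batch size x per-item length."""
    return batch_size * per_inference_length

def build_flowgraph(plan_file="model.plan"):
    # Deferred imports: these require a working GNU Radio + gr-wavelearner
    # install and are not needed just to compute vector lengths.
    from gnuradio import gr, blocks, analog
    import wavelearner

    tb = gr.top_block()
    src = analog.sig_source_c(1e6, analog.GR_COS_WAVE, 1e3, 1.0)
    to_vec = blocks.stream_to_vector(
        gr.sizeof_gr_complex, total_vlen(BATCH_SIZE, INPUT_LENGTH))
    infer = wavelearner.inference(  # signature is an assumption
        plan_file, True,
        total_vlen(BATCH_SIZE, INPUT_LENGTH),
        total_vlen(BATCH_SIZE, OUTPUT_LENGTH),
        BATCH_SIZE)
    sink = wavelearner.terminal_sink(  # signature is an assumption
        total_vlen(BATCH_SIZE, OUTPUT_LENGTH), BATCH_SIZE)
    tb.connect(src, to_vec, infer, sink)
    return tb
```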

How to Build and Install gr-wavelearner on Ubuntu

  1. Install the dependencies listed above (no, seriously, make sure they are installed)

    • Make sure you can import gnuradio and tensorrt from the same Python environment in which you are installing gr-wavelearner
  2. Clone the gr-wavelearner repo

    $ git clone https://github.com/deepwavedigital/gr-wavelearner.git
  3. This step may not be necessary if installing on the NVIDIA Jetson TX2. Check that the LD_LIBRARY_PATH and PATH environment variables are properly set for your CUDA install. This can typically be accomplished by placing the following at the end of your .bashrc file:

    # CUDA installation path
    export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH
    export PATH=/usr/local/cuda-9.0/bin:$PATH

    and then run:

    $ source ~/.bashrc
  4. Install the OOT Module

    $ cd gr-wavelearner
    $ mkdir build
    $ cd build
    $ cmake ../
    $ make
    $ sudo make install
    $ sudo ldconfig
  5. To uninstall gr-wavelearner blocks from GNU Radio Companion:

    $ cd gr-wavelearner/build
    $ sudo make uninstall
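To verify step 1 above (that gnuradio and tensorrt are visible from the same Python environment), a quick check like the following sketch can help; it only tests importability and assumes nothing about either package.

```python
# Check that the modules gr-wavelearner needs can be imported from this
# Python environment, without actually importing (and initializing) them.
import importlib.util

def check_modules(names):
    """Map each module name to True if it is importable, else False."""
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

if __name__ == "__main__":
    for name, found in check_modules(["gnuradio", "tensorrt"]).items():
        print(f"{name}: {'found' if found else 'MISSING'}")
```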


Screen Shots

Example Flow Graph of Deep Learning Classifier

Inference Block

Terminal Sink Block

General Workflow for Creating Applications

  1. Train your deep learning model (we suggest TensorFlow)
  2. Export the trained model to a UFF file
  3. Using TensorRT, optimize the UFF file into a .plan engine file. Note that this step must be performed on the system on which you will deploy your network. We provide an example of how to convert a UFF file to a PLAN file in examples/uff2plan.py.
  4. Load the .plan engine file into the wavelearner.inference block.
  5. Update the batch_size, input_length, and output_length parameters to match those of your deep learning model.
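The conversion in step 3 might look like the following sketch using the TensorRT 5.x Python API. The tensor names, shapes, and file names here are placeholder assumptions; the repository's examples/uff2plan.py is the authoritative version.

```python
# Hedged sketch of UFF -> PLAN conversion (TensorRT 5.x Python API).
# Input/output tensor names and shapes are placeholders -- substitute
# the values from your own model.
import os

def plan_path_for(uff_path):
    """Derive the output .plan file name from the .uff file name."""
    base, _ = os.path.splitext(uff_path)
    return base + ".plan"

def uff_to_plan(uff_path, input_name, input_shape, output_name,
                max_batch_size=1, workspace_bytes=1 << 30):
    # Deferred import: requires TensorRT on the deployment machine, since
    # the engine is optimized for the specific GPU it is built on.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.UffParser()
    parser.register_input(input_name, input_shape)
    parser.register_output(output_name)
    parser.parse(uff_path, network)

    builder.max_batch_size = max_batch_size
    builder.max_workspace_size = workspace_bytes
    engine = builder.build_cuda_engine(network)  # tuned for THIS GPU
    with open(plan_path_for(uff_path), "wb") as f:
        f.write(engine.serialize())
```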


Troubleshooting

| Software | Versions Tested | Application Notes        |
| -------- | --------------- | ------------------------ |
| Ubuntu   | 16.04           |                          |
| Windows  | 10              | Tested with TensorRT 5.0 |
| JetPack  | 3.0, 3.3        |                          |
| CUDA     | 9.0             |                          |
| cuDNN    | 7.2, 7.3        |                          |
| TensorRT | 3.0, 4.0, 5.0   |                          |


Known Issues / Future Enhancements


Tags

Deep Learning, Artificial Intelligence, Machine Learning, TensorRT, GPU, Deepwave Digital, AIR-T, Jetson, NVIDIA, GNU Radio


Credits and License

GR-WAVELEARNER is designed and written by Deepwave Digital, Inc. ([www.deepwavedigital.com](https://www.deepwavedigital.com)) and is licensed under the GNU General Public License v3.0. Copyright notices are at the top of each source file.


References

[1] NVIDIA TensorRT - Programmable Inference Accelerator: https://developer.nvidia.com/tensorrt

[2] GNU Radio - The Free & Open Software Radio Ecosystem: https://www.gnuradio.org

[3] Making Sense of Signals Presentation - NVIDIA GPU Technology Conference: http://on-demand.gputechconf.com/gtc/2018/video/S8375/