ejk43 opened this issue 5 years ago
It appears the "Modulation Recognition" example notebook is a bit outdated -- it was using Python 2 and pre-2.0 Keras.
I've got a fork now with some updates for Python 3 and more recent Keras. Of course I'm not totally sure it works with the most recent code, since my environment might be getting a little old by now, but I think it's at least slightly healthier.
I also tested two different network architectures, one with a dense-only approach and one with Conv2D layers. Both seem to require a lot of weights. Any pointers on where to start looking for pruning or binary/ternary optimizations in Keras? Might be nice to try a few techniques and compare results?
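For anyone picking this up: the usual starting point is magnitude pruning, which just zeroes out the smallest-magnitude weights until a target sparsity is reached (this is also what the TensorFlow Model Optimization toolkit mentioned below does under the hood, with a schedule and fine-tuning on top). A minimal NumPy sketch of the core idea, with made-up array sizes:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become exactly zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy kernel standing in for a real layer's weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {np.mean(pruned == 0):.2%}")
```

In a real workflow you would re-train (fine-tune) after or during pruning so the surviving weights compensate, which is exactly what the gradual-pruning schedule in tfmot automates.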
@ejk43 Hi, I'm trying to explore this, but apparently the dataset link in the notebook is broken -> https://radioml.com/datasets/
Hello, just for anyone interested in this issue: I trained a pruned version of the CNN model in the notebook using the TensorFlow Model Optimization toolkit.
The pruned model has a slightly higher loss compared with the full model. However, the performance on the test set is about the same.
The training code is in this repo: https://github.com/Duchstf/hls4ml-RF/tree/dmh/prune-1 Detailed comparison: https://github.com/Duchstf/hls4ml-RF/blob/dmh/prune-1/eval_performance.ipynb
Here is the output when I compare the zipped .h5 files:
Size of the unpruned model before compression: 5.03 MB
Size of the unpruned model after compression: 4.35 MB
Size of the pruned model before compression: 1.69 MB
Size of the pruned model after compression: 0.33 MB
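The numbers above are presumably measured the way the TF pruning tutorials do it: zip the saved .h5 with DEFLATE and compare file sizes, since a 90%-sparse weight file is mostly zeros and compresses far better. A stdlib-only sketch (the .bin files here are throwaway stand-ins for real model files):

```python
import os
import zipfile

def compressed_size_mb(path):
    """Zip a file with DEFLATE and return the compressed size in MB."""
    zip_path = path + ".zip"
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(path)
    return os.path.getsize(zip_path) / 1e6

# Dense-like file: 1 MB of incompressible random bytes
with open("dense.bin", "wb") as f:
    f.write(os.urandom(1_000_000))
# Pruned-like file: 90% zeros, so DEFLATE shrinks it dramatically
with open("sparse.bin", "wb") as f:
    f.write(os.urandom(100_000) + b"\x00" * 900_000)

print(compressed_size_mb("dense.bin"), compressed_size_mb("sparse.bin"))
```

This matches the pattern in the sizes quoted above: the unpruned model barely compresses, while the pruned one shrinks by roughly 5x.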
I used Javier's code for calculating the percentage pruned ... here is the result (the performance is still the same):
Also, just a note on one observation: TensorFlow's pruning code apparently always leaves the model at exactly the sparsity we request via the final_sparsity parameter.
pruning summary
layer conv1/prune_low_magnitude_conv1/kernel:0: 90.0% weights pruned
layer conv2/prune_low_magnitude_conv2/kernel:0: 90.0% weights pruned
layer dense1/prune_low_magnitude_dense1/kernel:0: 90.0% weights pruned
layer dense2/prune_low_magnitude_dense2/kernel:0: 90.0% weights pruned
model: 90.0% weights pruned
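For anyone who wants to reproduce that summary without digging up the exact script: per-layer "percentage pruned" is just the fraction of exactly-zero entries in each kernel. A hedged sketch over (name, array) pairs, which is what you'd get by zipping layer names with model.get_weights() in Keras (the toy arrays and names below are placeholders):

```python
import numpy as np

def sparsity_report(named_weights):
    """Print the fraction of exactly-zero entries per layer and overall."""
    total, zeros = 0, 0
    for name, w in named_weights:
        layer_zeros = int(np.sum(w == 0))
        print(f"layer {name}: {layer_zeros / w.size:.1%} weights pruned")
        total += w.size
        zeros += layer_zeros
    print(f"model: {zeros / total:.1%} weights pruned")

# Toy kernels standing in for real pruned layers (90% zeros each)
rng = np.random.default_rng(1)
a = rng.normal(size=(100, 50)); a.ravel()[:4500] = 0
b = rng.normal(size=(50, 11));  b.ravel()[:495] = 0
sparsity_report([("dense1/kernel:0", a), ("dense2/kernel:0", b)])
```

Note this counts exact zeros, so it only works after the pruning wrappers have actually applied their masks to the weights (e.g. after strip_pruning in tfmot).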
Updated notebook with Javier's code: https://github.com/Duchstf/hls4ml-RF/blob/dmh/prune-1/eval_performance.ipynb
Hi All,
A cursory investigation into ML for RF signal processing (hls4ml4rf?) based on Jason's recommendations seems pretty promising. It turns out that DeepSig has several public resources that look helpful to get started:
Datasets: https://www.deepsig.io/datasets
a. 2018.01A: Live over-the-air data
b. 2016.10a/10b: Simulated datasets, good for initial training
Example notebook for a basic Modulation Recognition application: https://github.com/radioML/examples/blob/master/modulation_recognition/RML2016.10a_VTCNN2_example.ipynb
The implemented network is:
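For reference without opening the notebook, the VT-CNN2 network in it is, roughly, two small conv layers followed by two dense layers. A sketch ported to Keras 2 / channels_last, reconstructed from the notebook's description, so treat the exact padding, filter counts, and dropout as approximate:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_vtcnn2(input_shape=(2, 128, 1), n_classes=11, dropout=0.5):
    """Approximate VT-CNN2: conv-conv-dense-dense on 2x128 I/Q frames."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.ZeroPadding2D(padding=(0, 2)),
        layers.Conv2D(256, (1, 3), activation="relu", name="conv1"),
        layers.Dropout(dropout),
        layers.ZeroPadding2D(padding=(0, 2)),
        layers.Conv2D(80, (2, 3), activation="relu", name="conv2"),
        layers.Dropout(dropout),
        layers.Flatten(),
        layers.Dense(256, activation="relu", name="dense1"),
        layers.Dropout(dropout),
        layers.Dense(n_classes, activation="softmax", name="dense2"),
    ])

model = build_vtcnn2()
model.summary()
```

Most of the parameters sit in the first dense layer (the flatten output is wide), which is consistent with the earlier comment that these models "require a lot of weights" and makes dense1 the obvious pruning target.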
Can we recreate the results here, maybe trim some parameters or discretize, and implement with HLS4ML?
Other related ideas worth mentioning but maybe not as promising: