fastmachinelearning / hls4ml

Machine learning on FPGAs using HLS
https://fastmachinelearning.org/hls4ml
Apache License 2.0

Fix loading weights in GarNetStacked and GarNet internal array precisions #827

Closed · joshlerner closed this 1 year ago

joshlerner commented 1 year ago

Description

In the change from using the reader to storing weights as attributes, the GarNetStack input feature weights, input feature biases, and output feature weights were missed. Fixed by storing all GarNetStack weights and biases as attributes.
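The attribute-storage pattern described above can be sketched as follows. This is an illustrative mock-up, not the actual hls4ml API: the class names, the `add_weights_variable` helper, and the tensor names (`input_transform_weights`, etc.) are assumptions standing in for the real GarNetStack internals.

```python
# Hypothetical sketch of storing per-sublayer weights as named attributes
# instead of fetching them through a model reader at conversion time.

class Layer:
    def __init__(self):
        self._weights = {}

    def add_weights_variable(self, name, data):
        # Register a tensor under a name so later conversion passes can
        # look it up directly, with no reader round-trip.
        self._weights[name] = data

    def get_weights(self, name):
        return self._weights[name]


class GarNetStackSketch(Layer):
    def initialize(self, n_sublayers, get_tensor):
        # The fix registers *every* per-sublayer tensor; previously the
        # input feature weights/biases and output feature weights for the
        # stacked variant were skipped.
        for i in range(n_sublayers):
            for kind in ('input_transform_weights',
                         'input_transform_biases',
                         'output_transform_weights'):
                self.add_weights_variable(f'{kind}_{i}', get_tensor(kind, i))
```

With all tensors registered up front, downstream passes only need the layer object, which is the behavior the fix restores for GarNetStack.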

Non-default precisions specified for internal GarNet arrays (edge weight, norm, etc.) were not converted to C++ definitions, producing typedef errors in firmware/parameters.h. Fixed by applying an APTypeConverter to all internal array precisions, not just those with default values.
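The shape of that fix can be sketched like this. The `APTypeConverter` below is a stand-in for the real hls4ml converter (its actual interface differs); the point is only that conversion must run for every entry, default or not.

```python
# Hypothetical sketch of the precision-conversion fix: every internal
# array precision gets converted to a C++-usable type string, so the
# typedefs emitted into firmware/parameters.h are always well-formed.

class APTypeConverter:
    def convert(self, precision):
        # Stand-in for the real converter, which maps a precision spec
        # (e.g. 'ap_fixed<16,6>') to a C++ type for the generated header.
        return precision


def emit_typedefs(internal_precisions, default='ap_fixed<16,6>'):
    converter = APTypeConverter()
    typedefs = {}
    for name, precision in internal_precisions.items():
        # Bug being illustrated: only default-valued precisions were
        # routed through the converter, so non-default entries produced
        # broken typedefs. The fix converts every entry unconditionally.
        typedefs[name] = converter.convert(precision or default)
    return typedefs
```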

Modified contrib/garnet.py to include an output activation for GarNetStack models, which was necessary to test the above changes. It had previously been commented out because it was unused.
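A framework-free sketch of what re-enabling that output activation means: the final transform's outputs get passed through an activation instead of being returned raw, so the Keras reference and the HLS output can be compared under the same nonlinearity. The function name and the `relu` choice here are illustrative, not the actual contrib/garnet.py code.

```python
# Hypothetical sketch: apply an output activation to the final
# GarNetStack transform instead of returning the raw linear outputs.

def apply_output_activation(outputs, activation):
    if activation == 'relu':
        # Elementwise max(0, x), the usual ReLU definition.
        return [max(0.0, x) for x in outputs]
    # 'linear' / unknown: pass the outputs through unchanged.
    return outputs
```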

Type of change

Tests

Added a test for GarNetStack models, similar to the pre-existing one in test_garnet.py

  • GarNet internal arrays are included in the generated config automatically for name and type granularity
  • Previous tests overwrote these specifications, but the new test includes non-default internal arrays in the config
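The config shape the new test exercises can be sketched as below. The layer name `gar_1` and the exact precision strings are illustrative assumptions; the structure follows the usual hls4ml per-layer config layout, with non-default precisions for the internal arrays that previously failed to convert.

```python
# Hypothetical sketch of a name-granularity hls4ml config that keeps
# non-default precisions for GarNet internal arrays (edge_weight, norm)
# rather than overwriting them with the model-wide default.

def build_config(default='ap_fixed<16,6>'):
    config = {
        'Model': {'Precision': default, 'ReuseFactor': 1},
        'LayerName': {},
    }
    # Non-default internal array precisions: before the fix these raw
    # strings were never converted to C++ typedefs and broke the build.
    config['LayerName']['gar_1'] = {
        'Precision': {
            'edge_weight': 'ap_fixed<10,3,AP_TRN,AP_SAT>',
            'norm': 'ap_fixed<14,4>',
        }
    }
    return config
```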

Checklist

joshlerner commented 1 year ago

Hello! I have been working on a GarNet calibration model to be run on an FPGA using hls4ml. This is my first pull request, so some of the formalities are new to me. Thank you for your patience!