Closed: DrWatt closed this pull request 3 months ago.
Sorry for the delay, but what is the status of this? It looks to me like maybe I can approve and merge it.
I was testing this and the changes work; however, the PR includes unnecessary changes and the test doesn't do anything (the data is all positive, so it passes even without the changes in the converter). I distilled the changes and made a proper test in https://github.com/vloncar/hls4ml/tree/qrelu_negative_slope
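For context, a small sketch of why all-positive data cannot catch a missing `negative_slope`; the function below is a plain NumPy stand-in, not hls4ml or QKeras code:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.25):
    # LeakyReLU: identity for x >= 0, scaled by negative_slope for x < 0
    return np.where(x >= 0, x, negative_slope * x)

x_pos = np.random.uniform(0, 4, size=1000)   # all positive: the slope is never used
assert np.allclose(leaky_relu(x_pos), np.maximum(x_pos, 0))  # identical to plain ReLU

x_mix = np.random.uniform(-4, 4, size=1000)  # negative inputs actually exercise the slope
# here leaky_relu(x_mix) differs from plain ReLU, so a broken converter would be caught
```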
I made #987 as a continuation of this PR
Description
The `quantized_relu` activation layer included in the QKeras library lets the user set the `negative_slope` option, which effectively changes the activation function from the usual ReLU to a LeakyReLU. This option was not recognised by hls4ml, meaning that the information was lost when the HLS model was created by the library. To fix this behaviour I have added `QLeakyReLU` to the list of supported layers in `model/layers.py`, using the `ParametrizedActivation` implementation, mimicking the `LeakyReLU` already present and thus following an implementation path similar to the "non-leaky" `quantized_relu`. Other changes were made to `model/profiling.py` and `utils/config.py` to make them compatible with the new layer.
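A minimal sketch of the use case this PR targets; the layer sizes, quantizer settings, and the 0.125 slope are illustrative, not taken from the PR:

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QActivation, QDense, quantized_bits, quantized_relu

import hls4ml

# quantized_relu with a nonzero negative_slope behaves like a quantized LeakyReLU
inputs = Input(shape=(16,))
x = QDense(8, kernel_quantizer=quantized_bits(8, 0), bias_quantizer=quantized_bits(8, 0))(inputs)
outputs = QActivation(quantized_relu(8, negative_slope=0.125))(x)
model = Model(inputs, outputs)

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=config)
# before this PR, the negative_slope information was dropped during conversion
```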
A couple of other fixes were made in `backends/vivado_accelerator/vivado_accelerator_config.py` (a missing "casting" function when comparing complex objects with strings) and in `model/types.py` (adding the `ap_` prefix to the HLS fixed-precision type, due to errors raised by the Vivado compiler).
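Regarding the `model/types.py` fix, a minimal illustration of the naming issue; the helper below is hypothetical, not the actual hls4ml code:

```python
def hls_fixed_type(width: int, integer: int, signed: bool = True) -> str:
    """Format a Vivado HLS arbitrary-precision fixed-point type name.

    Vivado only accepts the prefixed forms (ap_fixed / ap_ufixed);
    a type emitted as 'fixed<16,6>' without the 'ap_' prefix fails to compile.
    """
    return f"{'ap_fixed' if signed else 'ap_ufixed'}<{width},{integer}>"

assert hls_fixed_type(16, 6) == 'ap_fixed<16,6>'
```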
Type of change
Tests
The `quantized_relu` with a `negative_slope` different from the default one was added to the pytest routine covering QKeras layers. A specific test script has also been added to the pytest directory. The results from the new implementation are asserted to be equal to those of the QKeras layer with a relative tolerance of 0.00001.
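A hedged sketch of what such a check can look like; the model, slope value, and test name are illustrative and may differ from the actual script in the PR:

```python
import numpy as np
import pytest
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QActivation, quantized_relu

import hls4ml

@pytest.mark.parametrize('negative_slope', [0.25])  # a non-default slope
def test_quantized_relu_negative_slope(negative_slope, tmp_path):
    inputs = Input(shape=(8,))
    outputs = QActivation(quantized_relu(8, negative_slope=negative_slope))(inputs)
    model = Model(inputs, outputs)

    config = hls4ml.utils.config_from_keras_model(model, granularity='name')
    hls_model = hls4ml.converters.convert_from_keras_model(
        model, hls_config=config, output_dir=str(tmp_path)
    )
    hls_model.compile()

    # inputs must include negative values, otherwise the slope is never exercised
    x = np.random.uniform(-4, 4, size=(100, 8)).astype(np.float32)
    np.testing.assert_allclose(hls_model.predict(x), model.predict(x), rtol=1e-5)
```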
Checklist
I have run `pre-commit` on the files I edited or added.