Open thesps opened 3 years ago
I see you have a `qconv2d_batchnorm` layer which folds the weights of the two layers and then quantizes. We're bringing support for that to hls4ml, and it should help us save some resources and latency.

I'm wondering: do you plan to add the equivalent combined `QDense` + `BatchNormalization` layer to QKeras?
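For context, the standard batch-norm folding identity that such a layer relies on can be sketched as follows. This is an illustrative NumPy sketch of the math, not QKeras's actual implementation; the function name `fold_batchnorm` and the dense-layer setup are assumptions for the example.

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-3):
    """Fold BatchNormalization parameters into a preceding Dense kernel/bias.

    BN applied to a dense output:
        y = gamma * (x @ w + b - mean) / sqrt(var + eps) + beta
    is algebraically identical to a single dense layer:
        y = x @ (scale * w) + (scale * (b - mean) + beta)
    with scale = gamma / sqrt(var + eps), one factor per output unit.
    The folded weights can then be quantized as a single layer.
    """
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale                 # broadcasts over output units
    b_folded = beta + scale * (b - mean)
    return w_folded, b_folded
```

Folding before quantization matters because quantizing the folded weights matches what the fused hardware layer actually computes, rather than quantizing the dense and batch-norm stages separately.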
@thesps Great, glad to see this helps! Yes, `QDenseBatchnorm` is one of our TODO items, but we have other higher-priority tasks at the moment.
https://github.com/google/qkeras/pull/74