DamRsn closed this pull request 1 year ago.
Merging #89 (5ece924) into main (c521243) will decrease coverage by 0.52%. The diff coverage is 91.20%.
```diff
@@            Coverage Diff             @@
##             main      #89      +/-   ##
==========================================
- Coverage   96.06%   95.55%   -0.52%
==========================================
  Files          36       42       +6
  Lines        2872     3237     +365
==========================================
+ Hits         2759     3093     +334
- Misses        113      144      +31
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| RTNeural/batchnorm/batchnorm_eigen.tpp | 100.00% <ø> (ø) | |
| RTNeural/model_loader.h | 81.62% <72.94%> (-3.12%) | :arrow_down: |
| RTNeural/batchnorm/batchnorm2d_eigen.h | 86.95% <86.95%> (ø) | |
| ...Neural/conv1d_stateless/conv1d_stateless_eigen.tpp | 94.28% <94.28%> (ø) | |
| RTNeural/ModelT.h | 90.32% <96.15%> (+1.61%) | :arrow_up: |
| RTNeural/conv1d_stateless/conv1d_stateless_eigen.h | 96.15% <96.15%> (ø) | |
| RTNeural/conv2d/conv2d_eigen.h | 97.67% <97.67%> (ø) | |
| RTNeural/Model.h | 100.00% <100.00%> (ø) | |
| RTNeural/batchnorm/batchnorm2d_eigen.tpp | 100.00% <100.00%> (ø) | |
| RTNeural/batchnorm/batchnorm_eigen.h | 75.00% <100.00%> (+1.66%) | :arrow_up: |
| ... and 4 more | | |
Thanks for the PR! At a glance, these changes look great. Since there are a lot of changes here, it's going to take me a minute to review them thoroughly, but I just wanted to let you know that I am looking at it.
Supporting only an Eigen implementation is fine for now, but in order to stay compatible with the other backends, it would probably make sense to add some `#if RTNEURAL_USE_EIGEN` guards to the "main" headers (e.g. `batchnorm2d.h`), and for the relevant tests. That should also help the CI jobs to pass.
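For illustration, here is a minimal sketch of what such a guard might look like in `batchnorm2d.h`; the include line and file layout are assumptions made for the example, not the repository's actual contents:

```cpp
#pragma once

// Hypothetical sketch of a backend guard in a "main" header such as
// batchnorm2d.h; the included file name is assumed for the example.
#if RTNEURAL_USE_EIGEN
#include "batchnorm2d_eigen.h"
#else
// No fallback implementation yet: the other backends simply skip this
// layer, so their builds (and the CI jobs) can still compile.
#endif
```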
Cool, thanks!

I've just added a few `#if RTNEURAL_USE_EIGEN` guards so that the other backends can compile.
Hi Jatin,

I have implemented two new layers, 2D streaming convolution and 2D batch normalization, for a project I am currently working on for the Neural Audio Plugin Competition. These layers are designed specifically for neural networks that process frequency-domain data or similar: with these layers, the model runs once per frame (a vector containing a certain number of frequency bins) rather than once per individual sample.

At the moment, I have only implemented these layers with the Eigen backend.

Please find some details of the implementation below. Let me know what you think!
The chosen streaming approach performs all calculations involving a frame as soon as it arrives, and stores the partial results in the layer state. Once every contributing frame has been processed, those stored partials add up to the correct output.
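To make the idea concrete, here is a minimal, self-contained sketch of such a scatter-style streaming convolution. This illustrates the principle only and is not the PR's actual code: the class and member names are invented, and each time tap of the 2D kernel is collapsed into a single matrix acting on the whole frame (a real Conv2D kernel would correspond to a structured, Toeplitz-like version of that matrix).

```cpp
#include <Eigen/Dense>
#include <vector>

// Streams a causal convolution y[n] = sum_k taps[k] * x[n - k] one frame
// at a time: each incoming frame is multiplied with every tap immediately,
// and the products are scattered into a ring of future output slots.
struct StreamingConvSketch
{
    std::vector<Eigen::MatrixXf> taps;  // taps[k]: (out_features x in_features)
    std::vector<Eigen::VectorXf> state; // partial sums of upcoming outputs
    size_t pos = 0;                     // slot that completes on the next frame

    StreamingConvSketch(std::vector<Eigen::MatrixXf> kernelTaps, int outFeatures)
        : taps(std::move(kernelTaps)),
          state(taps.size(), Eigen::VectorXf::Zero(outFeatures))
    {
    }

    // Consume one input frame; return the output frame that is now complete.
    Eigen::VectorXf forward(const Eigen::VectorXf& frame)
    {
        const size_t K = taps.size();
        for(size_t k = 0; k < K; ++k)
            state[(pos + k) % K] += taps[k] * frame; // scatter into future outputs

        Eigen::VectorXf out = state[pos]; // all contributions have arrived
        state[pos].setZero();             // recycle the slot for frame n + K
        pos = (pos + 1) % K;
        return out;
    }
};

int main()
{
    // Two time taps of a kernel mapping 3 input features to 2 output features
    std::vector<Eigen::MatrixXf> taps { Eigen::MatrixXf::Random(2, 3),
                                        Eigen::MatrixXf::Random(2, 3) };
    StreamingConvSketch conv(std::move(taps), 2);

    for(int n = 0; n < 4; ++n)
    {
        const Eigen::VectorXf y = conv.forward(Eigen::VectorXf::Random(3));
        (void) y; // one complete output frame per input frame
    }
    return 0;
}
```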
- Implementation of BatchNorm 2D with Eigen. It works very similarly to batchnorm1d, but each channel now consists of more than one value, while the number of weights stays the same. It only works with `axis=-1` in TF/Keras.
- Minimal changes to the original API: `in_size` and `out_size` are still used, but are set to `num_filters_in * num_features_in` and `num_filters_out * num_features_out` respectively (a small sketch of this sizing convention follows below).
- Test coverage for the new layers, similar to the other tests, with some minor modifications to deal with frame-alignment issues for the different kinds of padding.
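To illustrate that sizing convention, here is a small sketch; the concrete sizes and the filters-as-rows layout of the flat vector are assumptions made for the example, not necessarily the PR's internal layout:

```cpp
#include <Eigen/Dense>

int main()
{
    // Assumed example sizes; per the PR, in_size = num_filters_in * num_features_in
    const int num_filters_in = 4;   // channels
    const int num_features_in = 16; // e.g. frequency bins per channel
    const int in_size = num_filters_in * num_features_in;

    // The layer still receives a flat vector of length in_size...
    Eigen::VectorXf flatInput = Eigen::VectorXf::Random(in_size);

    // ...which can be viewed as a (filters x features) frame without copying.
    Eigen::Map<Eigen::MatrixXf> frame(flatInput.data(), num_filters_in, num_features_in);

    // BatchNorm2D-style normalization: one parameter set per channel (row),
    // shared across all features of that channel, matching TF/Keras axis=-1.
    const Eigen::VectorXf mean = Eigen::VectorXf::Zero(num_filters_in);
    const Eigen::VectorXf invStd = Eigen::VectorXf::Ones(num_filters_in);
    const Eigen::MatrixXf normalized = invStd.asDiagonal() * (frame.colwise() - mean);
    (void) normalized;

    return 0;
}
```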