shuchitak closed this issue 2 years ago
I've also run the pipeline tests comparing Avona AEC and AGC performance against the py_aec + py_agc Python pipeline. I changed the following in the Avona pipeline for running this experiment:
Comparing the Avona vs. Python keyword scores for this reduced pipeline, the Avona results match Python exactly. I tested the Amazon recordings test suite as well and get the same results. Based on this, I'm confident that Avona AEC closely matches py_aec.
Avona and Python pipeline results for the `hydra_audio/xvf3510_no_processing_xmos_test_suite` test set:
- Avona pipeline: results_Avona_prev_arch_xcore.csv
- Python pipeline: results_Avona_prev_arch_python.csv

Avona and Python pipeline results for the `hydra_audio/xvf3510_3610_ffrs_test/recordings/xvf3610_v5_0_0_packed_recordings/20210902_xvf3610_v5_0_0_rerun_packed_1/unpacked/` test set:
- Avona pipeline: results_Avona_prev_arch_xcore_new_set_1.csv
- Python pipeline: results_Avona_prev_arch_python_new_set_1.csv
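For reference, the "match exactly" check on the results CSVs above can be done with a small script along these lines. This is only a sketch: the column name `score` and the row layout are assumptions, not the actual schema of the results files.

```python
import csv

def scores_match(csv_a, csv_b, score_col="score"):
    """Return True if the score column is identical row-for-row in both files.

    Assumes both CSVs have a header row and the same row ordering; the
    column name `score` is a hypothetical placeholder.
    """
    with open(csv_a, newline="") as fa, open(csv_b, newline="") as fb:
        rows_a = list(csv.DictReader(fa))
        rows_b = list(csv.DictReader(fb))
    if len(rows_a) != len(rows_b):
        return False
    return all(a[score_col] == b[score_col] for a, b in zip(rows_a, rows_b))
```

Comparing the raw score strings (rather than parsed floats) deliberately makes the check strict: any difference in the printed values counts as a mismatch.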
In terms of KWD scores, Avona AEC appears identical to the Python implementation. Further testing against Python will involve rewriting the AEC unit tests to compare against the Python model; this will be tracked in a separate issue.
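The rewritten unit tests would presumably compare per-frame AEC output from both implementations against a relative-error threshold. A minimal sketch of such a check is below; the threshold value and the idea of expressing the error in dB are assumptions about how the tests might be written, not the actual test design.

```python
import math

def output_error_db(xcore_out, ref_out):
    """Relative error of the xcore AEC output vs. the Python reference, in dB.

    Both arguments are flat sequences of samples for one frame. Returns
    -inf for a bit-exact match; a unit test would assert the result is
    below some threshold (e.g. -60 dB).
    """
    assert len(xcore_out) == len(ref_out), "frame lengths differ"
    err_power = sum((x - r) ** 2 for x, r in zip(xcore_out, ref_out))
    ref_power = sum(r ** 2 for r in ref_out)
    if err_power == 0.0:
        return float("-inf")  # bit-exact match
    return 10.0 * math.log10(err_power / ref_power)
```

A dB-relative metric is convenient here because it is independent of signal scale, so the same threshold can apply across test vectors of different levels.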
I discovered a few differences between the Avona and Python AEC implementations, and fixes for those are in PR #341. Pipeline results comparing the head of develop of Avona against PR #341 are uploaded as comments in the PR itself. @athapapa and I took a look at the results; they look very similar to Avona head of develop, and we think it's okay to go ahead and merge this change.