Open wendongj opened 1 year ago
Thank you for your interest in our work. The TriU-Net is a multi-channel speech enhancement model in the time-frequency domain. The paper is in submission.
Thanks for your reply.
The paper was recently accepted by JASA; it can be found at https://pubs.aip.org/asa/jasa/article/153/6/3378/2897718/Three-stage-hybrid-neural-beamformer-for-multi.
The code and pretrained parameters of the TriU-Net are available at https://github.com/CaA23187/TriU-Net-module.
If this paper is helpful to you, please cite:

@article{10.1121/10.0019802,
  author  = {Kuang, Kelan and Yang, Feiran and Li, Junfeng and Yang, Jun},
  title   = "{Three-stage hybrid neural beamformer for multi-channel speech enhancement}",
  journal = {The Journal of the Acoustical Society of America},
  volume  = {153},
  number  = {6},
  pages   = {3378-3389},
  year    = {2023},
  month   = {06},
  doi     = {10.1121/10.0019802},
  url     = {https://doi.org/10.1121/10.0019802},
  eprint  = {https://pubs.aip.org/asa/jasa/article-pdf/153/6/3378/18009324/3378_1_10.0019802.pdf},
}
Many thanks for your kind reply.
I have looked at your results, and they are better than those of other models, especially in pub noise. Other models show speech distortion, while the TriU-Net seems to recover the harmonic structure even more clearly than the clean speech, which is amazing. So this is a multi-channel speech enhancement model? I hope to read the paper later and, if possible, to see the code. Thanks for sharing the results.