ZhongshuHou / LSA

Ablation study of local spectral attention (LSA) for full-band speech enhancement (SE)
MIT License

Release pretrained models and information on implementation complexity #2

Open · retunelars opened this issue 1 year ago

retunelars commented 1 year ago

Is it possible to release the pretrained models benchmarked in the paper for people to try out themselves? For comparison with other models, it would also be relevant to provide information on the resulting implementation complexity, for example given by the number of multiply-accumulate operations needed per second.
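For reference, below is a minimal sketch of how such a MACs-per-second figure could be estimated with the `thop` profiler. The network here is only a placeholder standing in for the actual LSA model, and the 48 kHz / 10 ms-hop STFT front end (257 frequency bins, 100 frames per second) is an assumption, not taken from the paper:

```python
# Rough sketch: estimate multiply-accumulate operations (MACs) per second of audio
# for a frame-based speech-enhancement model. Replace the placeholder network and
# input shape with the real LSA model and its actual STFT configuration.
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Placeholder network (assumption) standing in for the full-band SE model.
model = nn.Sequential(
    nn.Conv1d(257, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv1d(256, 257, kernel_size=3, padding=1),
)

# Assumed front end: 48 kHz audio with a 10 ms hop -> 100 frames per second.
frames_per_second = 100
one_second_input = torch.randn(1, 257, frames_per_second)  # (batch, freq bins, frames)

# thop returns MAC count and parameter count for one forward pass,
# i.e. here for exactly one second of input audio.
macs, params = profile(model, inputs=(one_second_input,), verbose=False)
print(f"{macs / 1e6:.1f} MMACs per second, {params / 1e3:.1f} k parameters")
```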

ZhongshuHou commented 1 year ago

Thanks for your suggestions. We will consider releasing the pretrained models for individual trials, and the model complexity will be provided for comparison.