sp-uhh / storm

StoRM: A Diffusion-based Stochastic Regeneration Model for Speech Enhancement and Dereverberation
MIT License

Any way to speed up storm inference #12

Closed adeelabbas closed 10 months ago

adeelabbas commented 1 year ago

Hi, I am getting very slow inference performance on RTX 3090 (less than 1x real time). Wondering if there is any way to speed up the algorithm while keeping decent quality (quality that's still better than denoiser-only mode)?

jmlemercier commented 1 year ago

Hi, inference with 20 steps and no corrector (as recommended in the paper) should be rather quick on GPU, definitely faster than real-time. If you are looking for schemes to improve inference speed, I'd suggest waiting two weeks, as we have something in the works, or you can take a look at the existing literature on fast inference for diffusion-based generative models: most of those methods should be compatible with StoRM.
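To illustrate why the step count dominates inference cost: each reverse diffusion step calls the score network once (plus once more per corrector pass, which is why dropping the corrector helps). Below is a minimal, generic predictor-only (Euler-Maruyama) sampler sketch, not StoRM's actual code; `score_fn`, the noise scale, and the time grid are all hypothetical stand-ins. Wall-clock time scales linearly with `n_steps`.

```python
import numpy as np

def reverse_diffusion(y, score_fn, n_steps=20, t_eps=0.03):
    """Predictor-only reverse sampler sketch (hypothetical, not StoRM's API).

    y        : conditioning signal (e.g. a noisy spectrogram)
    score_fn : stand-in for the learned score model, called once per step
    n_steps  : number of reverse steps; runtime grows linearly with this
    """
    x = y + np.random.randn(*np.shape(y))          # hypothetical initialization
    timesteps = np.linspace(1.0, t_eps, n_steps)   # reverse time grid
    dt = (1.0 - t_eps) / n_steps
    for t in timesteps:
        # One score-network evaluation per step; no corrector pass.
        drift = -score_fn(x, y, t)
        x = x - drift * dt                          # Euler predictor step
        x = x + 0.1 * np.sqrt(dt) * np.random.randn(*np.shape(x))  # toy noise term
    return x
```

Halving `n_steps` roughly halves inference time, at some cost in output quality; the 20-step, no-corrector setting above is the paper's recommended trade-off.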

egorsmkv commented 1 year ago

@jmlemercier hi, did you release the second solution to the problem?

jmlemercier commented 1 year ago

Hi @egorsmkv this is actually ongoing work, should be released sometime around September!

wangtiance commented 1 year ago

Hi @jmlemercier , I'm also experiencing very slow inference with GPU. Is the September update going on as planned?

jmlemercier commented 11 months ago

Hi @egorsmkv @wangtiance, again, we did not experience very slow inference on GPU with the settings provided (it ran in real-time with 20 steps, as mentioned above). With regard to few-step diffusion, I suggest you take a look at our recent contributions on the matter.

egorsmkv commented 11 months ago

@jmlemercier thank you for the update!