slp-rl / aero

This repo contains the official PyTorch implementation of "Audio Super Resolution in the Spectral Domain" (ICASSP 2023)
MIT License

Is it possible to reduce GPU memory usage during inference? #19

Open tshmak opened 8 months ago

tshmak commented 8 months ago

It appears inference is taking >30 GB of GPU memory. Which parameters can I set to reduce this demand?

Thanks!
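A general-purpose sketch of two memory reducers that apply to most PyTorch audio models (this is not aero's actual API; the model, chunk sizes, and `chunked_infer` helper here are illustrative assumptions): run under `torch.inference_mode()` so no autograd buffers are kept, and process long waveforms in fixed-size chunks so peak activation memory is bounded by the chunk length rather than the full file.

```python
# Hedged sketch, not aero's real interface: a stand-in Conv1d plays the
# role of the super-resolution model, and chunked_infer bounds peak
# memory by processing the waveform piecewise with overlap context.
import torch
import torch.nn as nn

model = nn.Conv1d(1, 1, kernel_size=9, padding=4)  # stand-in for the SR model
model.eval()

def chunked_infer(model, wav, chunk=16000, overlap=512):
    """Run inference chunk by chunk so peak GPU memory stays bounded.

    Each chunk is padded with `overlap` samples of context on both sides,
    which is trimmed off the output before concatenation.
    """
    outs = []
    with torch.inference_mode():  # skips autograd bookkeeping entirely
        start = 0
        total = wav.shape[-1]
        while start < total:
            lo = max(0, start - overlap)
            hi = min(total, start + chunk + overlap)
            seg = model(wav[..., lo:hi])
            # drop the context samples so chunks tile the signal exactly
            left = start - lo
            seg = seg[..., left:left + min(chunk, total - start)]
            outs.append(seg)
            start += chunk
    return torch.cat(outs, dim=-1)

wav = torch.randn(1, 1, 48000)  # ~3 s of 16 kHz mono audio
out = chunked_infer(model, wav)
assert out.shape == wav.shape
```

Note the trade-off: a length-preserving model makes stitching trivial, but for models with long receptive fields you may need a larger `overlap` (or cross-fading at chunk boundaries) to avoid audible seams.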