Hi, @bhaveshgg17
I think I forgot to use a torch.no_grad() scope during inference. Could you please add that and try it again?
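For reference, wrapping the forward pass looks roughly like this. This is a minimal sketch, not the repo's actual inference script: the model shape, the 601 frequency bins, and all variable names here are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Dummy stand-ins; in the real script the model comes from the checkpoint
# loader and mixed_mag from the STFT of the mixed wav.
model = nn.Linear(601, 601)           # hypothetical placeholder for VoiceFilter
mixed_mag = torch.randn(1, 500, 601)  # (batch, frames, freq bins) spectrogram

model.eval()          # disable dropout / batch-norm updates
with torch.no_grad():  # no autograd graph, so activations are freed immediately
    mask = torch.sigmoid(model(mixed_mag))
    enhanced_mag = mixed_mag * mask
```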
Oh, wait. It must be because of the length of the mixed wav. 5 minutes is too long for VoiceFilter to run at once.
First, I reduced memory usage by half by using a torch.no_grad() scope in #6.
However, in order to use the VoiceFilter system on long audio, I think we need some kind of slicing strategy. Since we can't process the whole audio at once, we have to slice it into pieces and process them sequentially (or in a batch), as in the sketch below.
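A minimal sketch of such a slicing loop, assuming the model takes a (batch, frames, freq) magnitude spectrogram plus a speaker embedding; the function name, the argument names, and the 301-frame chunk length are all illustrative, not part of the repo:

```python
import torch

def infer_in_chunks(model, mixed_mag, dvec, chunk_frames=301):
    """Run inference slice-by-slice so peak GPU memory stays bounded.

    mixed_mag: (1, num_frames, num_freq) magnitude spectrogram.
    chunk_frames=301 (roughly 3 s) is an arbitrary illustrative length.
    """
    outputs = []
    with torch.no_grad():
        for start in range(0, mixed_mag.size(1), chunk_frames):
            chunk = mixed_mag[:, start:start + chunk_frames, :]
            mask = model(chunk, dvec)
            outputs.append((chunk * mask).cpu())  # move results off the GPU right away
    return torch.cat(outputs, dim=1)
```

Non-overlapping slices are the simplest version; overlapping windows with a crossfade at the seams would avoid audible artifacts at chunk boundaries, at the cost of some redundant computation.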
Hello @seungwonpark, thank you for the fix! Yes, I think we will have to use slicing for this. I will give it a try.
I met the same problem. Error message:
RuntimeError: CUDA out of memory. Tried to allocate 353.38 MiB (GPU 0; 7.79 GiB total capacity; 6.92 GiB already allocated; 77.56 MiB free; 35.16 MiB cached)
Modifying batch_size in the config.yaml file works; I set batch_size=6.
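For anyone else looking for it, the change is just this; the exact nesting may differ in your config.yaml, and the `train:` section here is an assumption:

```yaml
train:
  batch_size: 6  # reduced from the default so training fits in ~8 GiB of GPU memory
```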
I tried running the trained model on a single input and it gave an OOM error on GCP with one NVIDIA P100.
RuntimeError: CUDA out of memory. Tried to allocate 4.66 GiB (GPU 0; 15.90 GiB total capacity; 14.37 GiB already allocated; 889.81 MiB free; 19.21 MiB cached)
The mixed wav file (19 MB) was about 5 minutes long, and the reference file was 11 seconds. I don't know why it shows 14.37 GiB allocated when I'm not even training. I tried restarting the instance, but it did not help. Can you please suggest a way to reduce the memory required during inference? Thank you!
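In case it helps with debugging, printing the allocator's view of memory before and after the forward pass shows whether the model weights or the activations are what fills the card. This is just a sketch using standard PyTorch calls; note that `torch.cuda.memory_reserved` was named `memory_cached` in older releases.

```python
import torch

def report_gpu_memory(tag):
    # memory_allocated: bytes held by live tensors;
    # memory_reserved: bytes the caching allocator keeps from the driver
    # (this is what nvidia-smi reports as used).
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated={alloc:.1f} MiB reserved={reserved:.1f} MiB")

report_gpu_memory("before inference")
# ... run the (no_grad) forward pass here ...
torch.cuda.empty_cache()  # return cached blocks to the driver
report_gpu_memory("after inference")
```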