Closed riddlegit closed 2 months ago
I ran into this problem too. On my Mac (M1 Pro) there is a memory leak: the Python process's memory usage keeps growing after every new inference. Calling `torch.mps.empty_cache()` did not solve it.
My temporary fix is to force the device to `cpu` whenever it runs on Apple Silicon; inference is fast enough there.
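The fallback described above can be sketched as a small helper. This is a hypothetical illustration, not part of the MeloTTS API; `pick_device` is an invented name, and real code would also probe `torch.cuda.is_available()` before returning `"cuda"`:

```python
import platform

def pick_device(system: str, machine: str) -> str:
    # Temporary workaround sketch: route Apple Silicon (Darwin/arm64)
    # to the CPU to sidestep the MPS memory leak; otherwise prefer CUDA.
    # A real implementation would also check torch.cuda.is_available().
    if system == "Darwin" and machine == "arm64":
        return "cpu"
    return "cuda"

# Pick based on the current host.
device = pick_device(platform.system(), platform.machine())
```

The pure-string signature keeps the decision testable without importing torch; the caller just passes `platform.system()` and `platform.machine()`.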
Thanks
Maybe it's caused by the hardcoded call that releases only the CUDA cache?
https://github.com/myshell-ai/MeloTTS/blob/9ec3cc2a73c93ea1dfc7507ff71cd540c54d62e8/melo/api.py#L123
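Since the linked line hardcodes `torch.cuda.empty_cache()`, one possible fix is to dispatch on the active device instead of assuming CUDA. A minimal sketch, where `clear_backend_cache` is a hypothetical helper and the cache-clearing hooks are passed in so the logic stays testable:

```python
def clear_backend_cache(device: str, cuda_clear, mps_clear) -> str:
    # Hypothetical device-aware replacement for the hardcoded
    # torch.cuda.empty_cache() call: invoke the hook matching the
    # active device instead of assuming CUDA.
    if device.startswith("cuda"):
        cuda_clear()
        return "cuda"
    if device.startswith("mps"):
        mps_clear()
        return "mps"
    # Plain CPU tensors are freed by Python's GC; nothing to clear.
    return "none"
```

In `api.py` this would be called as `clear_backend_cache(self.device, torch.cuda.empty_cache, torch.mps.empty_cache)`, assuming the class keeps the device string in `self.device` (that attribute name is an assumption on my part).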