TienVTK opened 2 years ago
That's not really how CUDA allocation works. You don't need to free things manually; when the process dies, everything is freed automatically. If it were not all freed, that would be a bug in the CUDA drivers.
I don't think that's a good idea. My problem is a memory leak (RAM). I understand clearly that memory is freed when the process dies. In my case, we run Kaldi as a server: a long-lived process that serves many requests from users. My expectation is that Kaldi frees memory (RAM) when each request is done.
If memory usage is constantly growing, that is a problem and should be addressed. If not everything is freed at program exit but usage doesn't grow without bound, that doesn't really affect users, so it wouldn't be a priority to fix. You'd need to be more specific about the problem.
Here are all of my steps:
I tried to reduce the memory leak by resetting corr_id (corr_id = corr_id % MAX_CORR_ID). RAM usage still increases, but in a bounded way.
I want to ask: could fixing the memory leak this way cause other errors? Or do you have any other suggestions to fix this problem?
Many thanks,
We use the online code in a server; you can try it with a Docker image:
https://hub.docker.com/repository/docker/alphacep/kaldi-vosk-server-gpu
It is stable, no leaks. The leak must be somewhere else. It is not easy to use the code properly, though.
Could you share the Vosk source? We need to build and deploy natively because of our feature.
Many thanks.
Hi all
I have an issue with a memory leak when I run Kaldi code on an RTX 3090: memory is not freed when all processes are done (Kaldi decoding finished). No code modifications in my experiment. Demo source: https://github.com/kaldi-asr/kaldi/blob/master/src/cudadecoderbin/batched-wav-nnet3-cuda-online.cc
Any idea about this problem?