7thstorm opened this issue 3 years ago
I have the same problem with my Jetson Nano / GTX 1650. I haven't reached the limit yet, but I'm guessing I will only be able to train about 30 faces per person before I run out of memory. The GTX 1650 only has 4 GB of RAM and I don't have access to an NVIDIA card with more.
I have maxed out at 3869 MB so far on the GTX, so between training people I have to restart the container. Once DeepStack is fully loaded it floats around 1181 - 1185 MB.
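For anyone else trying to quantify this, here is a minimal sketch of how one might log GPU memory over time between training calls. It assumes `nvidia-smi` is on the PATH (on the Jetson Nano, `tegrastats` would be the equivalent); the polling interval is arbitrary:

```python
import subprocess
import time

# Poll nvidia-smi and print used/total GPU memory so the growth after
# each face-training call is visible over time.
QUERY = [
    "nvidia-smi",
    "--query-gpu=memory.used,memory.total",
    "--format=csv,noheader,nounits",
]

while True:
    out = subprocess.check_output(QUERY, text=True).strip()
    used, total = (int(v) for v in out.split(","))
    print(f"{time.strftime('%H:%M:%S')}  {used} / {total} MiB")
    time.sleep(30)
```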
Is there any update on this?
Just wondering if there has been any progress on the memory issue / memory leak where DeepStack doesn't release memory after training faces?
Nope, not as far as I know. DeepStack CPU on a Xeon E6-1650v2 @ 3.5 GHz: 48 images, 79% of 5 GB memory.
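If it helps to pin down where the CPU build is holding memory, the resident size of the DeepStack processes can be summed with something like `psutil`; the "deepstack" name filter below is a guess and may need adjusting to match what `ps` shows on your system:

```python
import psutil

# Sum resident memory of processes whose command line looks DeepStack-related.
total_rss = 0
for p in psutil.process_iter(["pid", "cmdline", "memory_info"]):
    cmdline = " ".join(p.info["cmdline"] or [])
    if "deepstack" in cmdline.lower() and p.info["memory_info"]:
        rss = p.info["memory_info"].rss
        total_rss += rss
        print(f"{p.info['pid']:>7}  {rss / 2**20:8.1f} MiB  {cmdline[:60]}")

print(f"total: {total_rss / 2**20:.1f} MiB")
```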
I think I'm having the same issue on the Windows/GPU version. I can't tell how much memory is being used per process, either due to a limitation of nvidia-smi on Windows, my card (P400), or some combination. However, there is a definite correlation between GPU memory usage and DeepStack entering an unrecoverable (without a restart) state where all requests result in a 100/Timeout error.
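For what it's worth, here is a rough sketch of querying NVML directly for per-process usage, assuming the `pynvml` package is installed; on Windows the per-process figure may still come back empty, which would match what I'm seeing:

```python
import pynvml

# Ask NVML directly for per-process GPU memory; nvidia-smi on Windows
# often cannot report this figure.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
    # usedGpuMemory is None when the driver does not expose it
    mem = proc.usedGpuMemory / 2**20 if proc.usedGpuMemory else None
    print(f"pid={proc.pid}  usedGpuMemory={mem} MiB")

pynvml.nvmlShutdown()
```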
Without better logs (https://github.com/johnolafenwa/DeepStack/issues/142) I can't be sure exactly what is going on, but there's a strong correlation.
I've been tracking my troubleshooting and the steps I've taken here: https://ipcamtalk.com/threads/deepstack-gpu-memory-issue-error-100-timeout-after-several-hours.60827/
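Until the root cause is found, the workaround I'm leaning toward is a watchdog that probes the face endpoint and restarts DeepStack when it stops answering. A rough sketch, assuming a Docker deployment; the container name, port, and probe image are placeholders, and on the Windows install a service restart would replace `docker restart`:

```python
import subprocess
import time

import requests

DEEPSTACK_URL = "http://localhost:5000/v1/vision/face/recognize"  # adjust port
CONTAINER = "deepstack"    # placeholder container name
PROBE_IMAGE = "probe.jpg"  # any small image works as a liveness probe


def healthy() -> bool:
    try:
        with open(PROBE_IMAGE, "rb") as f:
            r = requests.post(DEEPSTACK_URL, files={"image": f}, timeout=30)
        return r.ok and r.json().get("success", False)
    except (requests.RequestException, ValueError):
        return False


while True:
    if not healthy():
        # Looks like the unrecoverable 100/Timeout state: bounce the container.
        subprocess.run(["docker", "restart", CONTAINER], check=False)
        time.sleep(60)  # give the models time to reload
    time.sleep(300)
```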
I trained 4 face images and it's using more than half of the memory on a Jetson Nano. If I train two more images it runs out of memory.
System Specs:
The DeepStack process does not release memory after processing.
- Memory usage upon starting DeepStack in Docker: 927 MB
- Memory after registering 27 faces: 3.2 GB
This high memory usage prevents me from performing other functions, including registering more faces.
Either I'm missing something or memory management needs some serious improvement. The only way around this is to keep stopping the Docker container after each action. Thoughts?
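Since restarting the container after each action is the only thing that works for me right now, here is a rough sketch of automating it: register a batch of faces, then restart DeepStack before the next batch. The port, container name, batch size, and folder layout are assumptions, not anything from the DeepStack docs beyond the register endpoint itself:

```python
import subprocess
import time
from pathlib import Path

import requests

REGISTER_URL = "http://localhost:80/v1/vision/face/register"  # adjust port
CONTAINER = "deepstack"    # placeholder container name
FACES_DIR = Path("faces")  # assumed layout: faces/<person>/<image>.jpg
BATCH_SIZE = 5             # restart after this many registrations


def register(person: str, image: Path) -> None:
    with image.open("rb") as f:
        r = requests.post(REGISTER_URL, files={"image": f},
                          data={"userid": person}, timeout=60)
    r.raise_for_status()


count = 0
for person_dir in FACES_DIR.iterdir():
    if not person_dir.is_dir():
        continue
    for image in person_dir.glob("*.jpg"):
        register(person_dir.name, image)
        count += 1
        if count % BATCH_SIZE == 0:
            # Work around the leak: restart DeepStack to release memory.
            subprocess.run(["docker", "restart", CONTAINER], check=False)
            time.sleep(60)  # wait for the models to reload
```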