sign-language-processing / segmentation

Sign language pose segmentation model on both the sentence and sign level

High memory usage #3

Open: MaithriRao opened this issue 7 months ago

MaithriRao commented 7 months ago

While running `python -m sign_language_segmentation.src.train --dataset=dgs_corpus --pose=holistic --fps=25 --hidden_dim=64 --encoder_depth=1 --encoder_bidirectional=false --optical_flow=true --only_optical_flow=true --weighted_loss=false --classes=io` as suggested in the README, I got an Out Of Memory error while loading the dataset with 50GB of memory available; 100GB was enough. How much memory does loading the dataset take for you? Is it expected to take this much?
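For reference, one way to quantify this is to compare the process's resident set size before and after the loading step. Below is a minimal sketch using psutil; the `load_dataset` call in the usage comment is hypothetical and stands in for whatever loading code `sign_language_segmentation.src.train` actually runs:

```python
# A minimal sketch (not part of this repo) for measuring how much RAM
# a loading step consumes, using psutil.
import psutil


def rss_delta_gb(load_fn) -> float:
    """Run load_fn and return the growth in resident memory, in GB.

    Note: this compares RSS before and after the call, so a transient
    peak during loading can be higher than the value reported here.
    """
    process = psutil.Process()
    before = process.memory_info().rss
    load_fn()
    after = process.memory_info().rss
    return (after - before) / 1024 ** 3


# Hypothetical usage; substitute the project's actual loading call:
# print(f"{rss_delta_gb(lambda: load_dataset('dgs_corpus')):.1f} GB")
```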

AmitMY commented 7 months ago

Seems like we are using 128GB of memory (https://github.com/sign-language-processing/segmentation/blob/main/sign_language_segmentation/jobs/job_gpu.sh#L11C2-L11C22), which I agree is excessive. This is probably because we load the entire dataset and keep it in memory.

I believe optimizations can be made, but this will require further investigation.
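One common optimization for this pattern is to keep only file paths resident and decode each example on demand; with a `DataLoader` and `num_workers > 0`, peak RAM is then bounded by the batches in flight rather than the corpus size. The sketch below assumes one pose file per example; the glob pattern and the byte-level "parsing" are illustrative placeholders, not this repository's actual data pipeline:

```python
# A minimal lazy-loading sketch, assuming one pose file per example.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset


class LazyPoseDataset(Dataset):
    """Keeps only file paths in memory; reads one example per __getitem__."""

    def __init__(self, pose_dir: str):
        self.paths = sorted(Path(pose_dir).glob("*.pose"))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int) -> torch.Tensor:
        raw = self.paths[idx].read_bytes()
        # Placeholder: a real implementation would parse the pose file
        # into a (frames, keypoints, dims) tensor here.
        return torch.frombuffer(bytearray(raw), dtype=torch.uint8)


# Hypothetical usage: workers decode in parallel, and only the batches
# currently in flight occupy RAM.
# loader = DataLoader(LazyPoseDataset("poses/"), batch_size=1, num_workers=4)
```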