Open MaithriRao opened 7 months ago
Seems like we are using 128GB of memory (https://github.com/sign-language-processing/segmentation/blob/main/sign_language_segmentation/jobs/job_gpu.sh#L11C2-L11C22), which I agree is excessive. It is probably because we load the entire dataset and keep it in memory.
I believe optimizations can be made, but this will require further investigation.
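If the loader really does keep the whole corpus resident, one possible direction is to stream examples instead of caching them. The sketch below is only an illustration, not the repository's actual data code: the StreamingPoseDataset class, the use of .npy files, and the paths variable are all assumptions for the sake of the example.

```python
import numpy as np
import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamingPoseDataset(IterableDataset):
    """Yields one example at a time, so peak memory is roughly one example
    plus the DataLoader's prefetch buffer, rather than the whole corpus."""

    def __init__(self, example_paths):
        self.example_paths = example_paths

    def __iter__(self):
        for path in self.example_paths:
            arr = np.load(path)          # read a single pose array from disk
            yield torch.from_numpy(arr)  # hand one example to the training loop

# Hypothetical usage: the DataLoader pulls examples lazily.
# loader = DataLoader(StreamingPoseDataset(paths), batch_size=8)
```

Whether something like this fits depends on how the dgs_corpus examples are stored and preprocessed, so it would still need the investigation mentioned above.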
While running
python -m sign_language_segmentation.src.train --dataset=dgs_corpus --pose=holistic --fps=25 --hidden_dim=64 --encoder_depth=1 --encoder_bidirectional=false --optical_flow=true --only_optical_flow=true --weighted_loss=false --classes=io
as suggested in the README, I was getting an out-of-memory error while loading the dataset with 50GB of memory available, but 100GB was enough. How much memory does loading the dataset take for you? Is it expected to take this much memory?
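To get a concrete number, one way to measure the dataset's footprint is to compare the process's resident memory before and after the loading call. This is only a measurement sketch, assuming psutil is installed; the actual loading call from the training script goes where the placeholder comment is.

```python
import os
import psutil

def rss_gb() -> float:
    """Resident set size of the current process, in GB."""
    return psutil.Process(os.getpid()).memory_info().rss / 1e9

before = rss_gb()
# ... call the same dataset-loading code the training script uses here ...
after = rss_gb()
print(f"Dataset loading added roughly {after - before:.1f} GB of resident memory")
```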