Closed · QaisarRajput closed this issue 1 year ago
Dear @QaisarRajput ,
just to clarify: while the error is thrown by torch, the shared memory does not relate to the available VRAM (of the GPU) but to the RAM (of the CPU). I was able to run the container with 12GB of shared memory as well; maybe there is no space left on your hard drive/SSD? (Note: the error does not refer to missing VRAM; that would be a CUDA out of memory error.)
Best, Michael
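To verify this on your end: the container's shared memory is mounted at /dev/shm, so you can check how much is actually available inside the container and raise the limit when starting it. A minimal sketch, assuming a plain docker setup (the image name is a placeholder):

```shell
# Inside the container: check the size and current usage of the shared-memory mount
df -h /dev/shm

# On the host: start the container with a larger shared-memory segment
# (replace <image> with your actual image name)
docker run --shm-size=12g <image>
```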
Since there was no update for some time I’ll close this issue for now. Please feel free to reopen it if the problem persists, or open a new one.
Hi, thanks for the amazing repo. We are using this on our own dataset, but our infrastructure cannot allow shared memory beyond a certain point (12GB), and I am getting the error below. Is there a way to reduce this need, for example by reducing a multiprocessing parameter or the amount of concurrent processing? I am not sure if
env_det_num_threads=6
helps, as that controls CPU usage while this error is related to torch (GPU). Please correct me if there is a gap in my understanding.
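For anyone hitting the same limit: PyTorch DataLoader workers pass batches back to the main process through POSIX shared memory, i.e. the segments that fill /dev/shm, so fewer workers and smaller in-flight batches directly shrink that footprint. A stdlib-only sketch of the underlying mechanism (an illustration, not nnDetection's actual code):

```python
# Illustration: DataLoader workers in torch exchange tensors via POSIX
# shared memory, which is backed by files in /dev/shm. This allocates a
# segment the same way, to show what occupies that space.
from multiprocessing import shared_memory

def allocate_segment(n_bytes: int) -> int:
    """Allocate a shared-memory segment, report its size, and free it."""
    shm = shared_memory.SharedMemory(create=True, size=n_bytes)
    try:
        # The OS may round the size up to a multiple of the page size.
        return shm.size
    finally:
        shm.close()
        shm.unlink()  # remove the backing file from /dev/shm

print(allocate_segment(1024 * 1024) >= 1024 * 1024)  # → True
```

Each concurrent worker holds segments like this for its prefetched batches, which is why lowering `num_workers` (or the batch/patch size) reduces the peak /dev/shm usage.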