PyTorch's DataLoader uses worker processes to load data in parallel. Each additional worker increases memory usage, which can cause the training process to be killed by the OOM killer.
In a Docker environment where RAM is limited, this is troublesome. It would be nice to be able to set this parameter to any given value instead of always relying on the number of cores available.
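For reference, plain PyTorch exposes this as the `num_workers` argument of `DataLoader`. The sketch below (using a made-up in-memory dataset) shows what passing an explicit value looks like, as opposed to deriving it from something like `os.cpu_count()`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset, just for illustration.
dataset = TensorDataset(
    torch.randn(1000, 3, 32, 32),
    torch.randint(0, 10, (1000,)),
)

# num_workers controls how many subprocesses load batches in parallel.
# Memory usage grows roughly with the worker count, since each worker
# keeps its own loading state; num_workers=0 loads data in the main
# process and uses the least RAM.
loader = DataLoader(dataset, batch_size=64, num_workers=2)

for batch, labels in loader:
    pass  # training step would go here
```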