I have run into a problem: training only runs when the DataLoader's num_workers is 0. The machine has 32 GB of RAM, Ubuntu 20.04, PyTorch 1.10. I found that memory usage grows steadily while loading the grading labels, as well as the data paths and consolidation labels, and memory runs out. Does insufficient memory mean num_workers must be 0 to run? With num_workers set to 0, training is slow. I hope someone can help. #67
I am using a computer with 32 GB of RAM and 32 GB of swap space. Do I need more memory to support training?
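One common cause of this kind of per-worker memory growth (an assumption here, since the Dataset code is not shown) is keeping the labels as large Python lists or dicts inside the Dataset. When num_workers > 0, each worker process is forked from the parent; merely reading Python objects updates their reference counts, which defeats copy-on-write and duplicates the memory in every worker. Storing labels in a single NumPy array avoids the per-object overhead and lets workers share the buffer. A minimal sketch of the difference in footprint:

```python
import sys
import numpy as np

# Hypothetical example: one million integer class labels.
n = 1_000_000

# A Python list keeps one boxed int object per label. With num_workers > 0,
# forked DataLoader workers touch each object's refcount, defeating
# copy-on-write, so this memory is effectively duplicated per worker.
labels_list = list(range(n))

# A NumPy array keeps one contiguous buffer with no per-element refcounts,
# so forked workers can share it via copy-on-write.
labels_array = np.arange(n, dtype=np.int64)

list_bytes = sys.getsizeof(labels_list) + sum(sys.getsizeof(x) for x in labels_list)
array_bytes = labels_array.nbytes  # 8 bytes per int64 label

print(f"list : {list_bytes / 1e6:.1f} MB")
print(f"array: {array_bytes / 1e6:.1f} MB")
```

If the Dataset already stores labels compactly, another thing worth checking is whether heavy parsing is done in `__init__` (duplicated across workers) rather than lazily in `__getitem__`.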