jfzhang95 / pytorch-deeplab-xception

DeepLab v3+ model in PyTorch. Support different backbones.
MIT License

The memory consumption increases with the number of experiments #202

Open Mnsy-Syl opened 3 years ago

Mnsy-Syl commented 3 years ago

The first time, the whole training process completes with batch_size 8 and 1 worker. When I start a new training run, memory usage suddenly increases by a large amount and eventually a CUDA out-of-memory error is raised. Even adjusting the batch_size down to 2 only slows down the training process. I get this result both on Colab and on my local machine. Does anyone know why?
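For what it's worth, one common cause of memory growing across back-to-back runs in the same process is that references to the previous model, optimizer, or loss tensors are still alive, so PyTorch's caching allocator never returns that memory. Below is a minimal, self-contained sketch (not the repo's actual `Trainer` code; `run_experiment` and its contents are hypothetical stand-ins) showing how one might release GPU memory between experiments and check what is still allocated:

```python
import gc
import torch

def run_experiment(batch_size):
    # Hypothetical stand-ins for the real model/optimizer/training loop.
    model = torch.nn.Linear(10, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(3):
        x = torch.randn(batch_size, 10, device="cuda")
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Drop every reference that keeps CUDA tensors alive before the next run.
    del model, optimizer, x, loss
    gc.collect()                  # break any lingering reference cycles
    torch.cuda.empty_cache()      # return cached blocks so the next run starts clean
    print(torch.cuda.memory_allocated() / 1024**2, "MiB still allocated")

if __name__ == "__main__":
    for run in range(2):          # back-to-back experiments in one process
        run_experiment(batch_size=8)
```

If `torch.cuda.memory_allocated()` reports a large number after the first experiment finishes, something from that run (for example a stored loss tensor that still carries its computation graph) is being kept alive and would explain the growing consumption.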