I suspect a memory leak when running train.py for single-GPU training. Memory usage reaches about 95% after a period of time and keeps rising.
Afterwards, I ran a memory analysis with memory_profiler (a rough sketch of the profiling setup is below), and it appears that memory is being over-allocated during the data loading phase. Any suggestions for a fix would be appreciated.
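For reference, this is roughly how I attached memory_profiler to the data loading step. It is only a sketch: the dataset/loader construction is commented out as a placeholder for the actual Vimeo7Dataset setup in train.py, and `load_batches` is just a name I use here.

```python
# Minimal sketch of the profiling setup (placeholders for the real dataset/loader).
from memory_profiler import profile

from torch.utils.data import DataLoader


@profile  # prints a line-by-line memory report when the function returns
def load_batches(loader, n_batches=100):
    # Iterate a fixed number of batches to check whether memory keeps growing.
    for i, batch in enumerate(loader):
        if i >= n_batches:
            break


if __name__ == "__main__":
    # dataset = Vimeo7Dataset(opt)                                  # placeholder
    # loader = DataLoader(dataset, batch_size=..., num_workers=...)  # placeholder
    # load_batches(loader)
    pass
```

The line-by-line report from the decorated function is what pointed me at the data loading phase as the place where memory grows.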
I used the same training data (Vimeo90K) and didn't make any major changes to the Vimeo7Dataset class, apart from the changes below to make it fit my own cache_keys.pkl.
My env:
- OS: Windows 10
- Python: 3.8
- PyTorch: 1.13.1
- NumPy: 1.23.5
- GPU: RTX 3090 Ti
- RAM: 32GB