I run train.py and the CUDA memory usage just gets larger after several epochs. Here is the strange thing: if I don't create new batches and instead reuse the same data, the CUDA memory usage stays flat, but if I create new batches every epoch, the CUDA memory keeps growing. I found out that the PyTorch tutorial doesn't run epochs but iterations, so I don't know where the problem in my code is. I need your help.
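While searching, I saw the PyTorch FAQ warns about accumulating autograd history across the training loop. I'm not sure this is my actual bug, but here is a minimal sketch (hypothetical names, not my real files) of the kind of pattern that reproduces the symptom: fresh batch tensors are built on the GPU every epoch, and the `loss` tensors, which still carry autograd history, get kept in a list across epochs, so allocated memory keeps climbing:

```python
import torch
import torch.nn as nn

# Minimal sketch, not my actual train.py: the model, data shapes, and the
# `loss_history` list are made-up names, just to illustrate the pattern.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

loss_history = []  # survives across epochs

for epoch in range(5):
    # "creating new batches": fresh GPU tensors are allocated every epoch
    batches = [(torch.randn(64, 10, device=device),
                torch.randn(64, 1, device=device)) for _ in range(100)]
    for x, y in batches:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        # Leak: `loss` is a CUDA tensor with autograd history, so keeping
        # the tensor itself accumulates references epoch after epoch.
        loss_history.append(loss)
        # Fix: keep a plain Python number instead:
        # loss_history.append(loss.item())
    if device.type == "cuda":
        print(f"epoch {epoch}: {torch.cuda.memory_allocated() / 2**20:.1f} MiB allocated")
```

If something like this is hiding in my train.py, then storing `loss.item()` instead of the tensor should keep memory flat, but I can't tell if that's what is happening here.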
Here is my code:
train.py
model.py
dataloader.py