Open dzz416 opened 2 years ago
You can reduce the training patch size or design a lighter model instead.
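As a sketch, you could shrink the patch size in the options file before launching training. The key names here (`GT_size`, `batch_size`) and the file path are assumptions based on common BasicSR/EDVR-style configs and may differ in this repo; smaller patches cut activation memory roughly quadratically with the patch side length.

```python
import yaml  # PyYAML, a dependency of most BasicSR-style repos

# Hypothetical path; point this at the options file you actually run.
OPT_PATH = "options/train/edvr_realvsr_notsa_split.yml"

with open(OPT_PATH) as f:
    opt = yaml.safe_load(f)

# Key names are assumptions from BasicSR-style configs, not confirmed here.
opt["datasets"]["train"]["GT_size"] = 128   # e.g. halve a 256 default
opt["datasets"]["train"]["batch_size"] = 1  # already minimal in your case

with open("options/train/edvr_realvsr_notsa_split_small.yml", "w") as f:
    yaml.dump(opt, f)
```

Then point `codes/train.py` at the reduced options file.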
It seems that it is not a lack of memory:
21-12-07 21:38:19.848 - INFO: Model [VideoSRModel] is created.
21-12-07 21:38:19.848 - INFO: Start training from epoch: 0, iter: 0
/home/dzz/.conda/envs/realvsr/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Traceback (most recent call last):
File "codes/train.py", line 339, in
I'm running edvr_realvsr_notsa_split.yml. What should I do?
I only have one TITAN with 12 GB, and the batch size is set to 1, but I still get an error. What should I do?