neosr-project opened this issue 11 months ago
Adding a condition around the call to self.update_params() solved the warning:
self.training = training
if self.training is False:
    self.eval_conv.weight.requires_grad = False
    self.eval_conv.bias.requires_grad = False
    self.update_params()
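For context, here is a minimal sketch of where such a condition could live inside a re-parametrized block. The class name, channel count, and fusion arithmetic below are illustrative assumptions; only eval_conv and update_params() are names taken from the ported network, everything else is not the actual network code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamBlock(nn.Module):
    """Illustrative re-parametrized block, not the real architecture."""

    def __init__(self, channels: int = 48, training: bool = True):
        super().__init__()
        # branches used during training
        self.conv_3x3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_1x1 = nn.Conv2d(channels, channels, 1)
        # fused convolution used only at inference time
        self.eval_conv = nn.Conv2d(channels, channels, 3, padding=1)

        self.training = training
        if self.training is False:
            # keep the fused conv out of the optimizer and fill it once
            self.eval_conv.weight.requires_grad = False
            self.eval_conv.bias.requires_grad = False
            self.update_params()

    def update_params(self):
        # fold the 1x1 branch into the 3x3 kernel (sum of parallel branches)
        with torch.no_grad():
            fused_w = self.conv_3x3.weight + F.pad(self.conv_1x1.weight, [1, 1, 1, 1])
            fused_b = self.conv_3x3.bias + self.conv_1x1.bias
            self.eval_conv.weight.copy_(fused_w)
            self.eval_conv.bias.copy_(fused_b)

    def forward(self, x):
        if self.training:
            return self.conv_3x3(x) + self.conv_1x1(x)
        return self.eval_conv(x)
```

One thing worth noting: nn.Module already keeps a training attribute that .train() and .eval() toggle, so assigning it in __init__ overrides that default. Here that appears intentional, since the mode is fixed by the options file before the model is built.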
Where self.training is a bool that gets its value from the options file:
from pathlib import Path
from neosr.utils.options import parse_options

# initialize options parsing
root_path = Path(__file__).parents[2]
opt, args = parse_options(root_path, is_train=True)

# set variable for training mode
if 'train' in opt['datasets']:
    training = True
else:
    training = False
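The parsed flag then has to reach the architecture when it is instantiated. The hand-off below is only a hypothetical sketch (PlaceholderNet and its signature are made up; neosr builds the real network from the options file):

```python
import torch.nn as nn

class PlaceholderNet(nn.Module):
    """Stand-in for the ported architecture; only the constructor wiring matters here."""

    def __init__(self, training: bool = True):
        super().__init__()
        # mirrors the fix above: the flag decides whether eval_conv gets frozen
        self.training = training
        self.body = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        return self.body(x)

# 'training' is the bool derived from opt['datasets'] above
model = PlaceholderNet(training=training)
```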
Hi! Thanks for your work. I ported the network to neosr; however, the re-parametrization creates torch warnings, as discussed in issue 16:
Is there any way to prevent it? Setting self.training to True doesn't seem to solve the issue; it looks like eval_conv is still being sent to the optimizer. Thanks in advance.
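For anyone hitting the same warning, one generic way to keep a frozen layer like eval_conv out of the optimizer is to filter on requires_grad when the optimizer is built. This is a standalone sketch with a toy model, not the neosr training loop:

```python
import torch
import torch.nn as nn

# toy model: the second conv plays the role of a frozen eval_conv
model = nn.Sequential(
    nn.Conv2d(3, 3, 3, padding=1),   # trainable branch
    nn.Conv2d(3, 3, 3, padding=1),   # stand-in for eval_conv
)
for p in model[1].parameters():
    p.requires_grad = False          # frozen, as in the fix above

# hand the optimizer only the parameters that still require gradients,
# so the frozen conv is never registered with it
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)

print(sum(p.numel() for p in trainable))  # count excludes the frozen conv's weights
```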