victorca25 / traiNNer

traiNNer: Deep learning framework for image and video super-resolution, restoration and image-to-image translation, for training and testing.
Apache License 2.0

TypeError: 'NoneType' object cannot be interpreted as an integer #26

Closed: keywae closed this issue 3 years ago

keywae commented 3 years ago

Hello. I am training on Colab and I get the following error.

export CUDA_VISIBLE_DEVICES=0
20-12-30 03:59:48.658 - INFO:
  name: ftrainer use_tb_logger: False model: srragan scale: 8 batch_multiplier: 1 gpu_ids: [0]
  datasets:[
    train:[ name: Dataset mode: LRHROTF dataroot_HR: ['/content/datasets/set0/train/hr', '/content/datasets/set1/train/hr', '/content/datasets/set2/train/hr'] dataroot_LR: ['/content/datasets/set0/train/lr', '/content/datasets/set1/train/lr', '/content/datasets/set2/train/lr'] subset_file: None use_shuffle: True n_workers: 4 batch_size: 100 HR_size: 128 phase: train scale: 8 data_type: img virtual_batch_size: 100 ]
    val:[ name: Validation mode: LRHROTF dataroot_HR: ['/content/datasets/set0/val/hr', '/content/datasets/set1/val/hr', '/content/datasets/set2/val/hr'] dataroot_LR: ['/content/datasets/set0/val/lr', '/content/datasets/set1/val/lr', '/content/datasets/set2/val/lr'] phase: val scale: 8 data_type: img ]
  ]
  path:[ root: /content/BasicSR/ pretrain_model_G: ../experiments/pretrained_models/Restart.pth experiments_root: /content/BasicSR/experiments/ftrainer models: /content/BasicSR/experiments/ftrainer/models training_state: /content/BasicSR/experiments/ftrainer/training_state log: /content/BasicSR/experiments/ftrainer val_images: /content/BasicSR/experiments/ftrainer/val_images ]
  network_G:[ which_model_G: RRDB_net norm_type: None mode: CNA nf: 64 nb: 23 in_nc: 3 out_nc: 3 gc: 32 group: 1 convtype: Conv2D net_act: leakyrelu scale: 8 ]
  network_D:[ which_model_D: discriminator_vgg norm_type: batch act_type: leakyrelu mode: CNA nf: 64 in_nc: 3 ]
  train:[ lr_G: 0.0001 lr_D: 0.0001 use_frequency_separation: False lr_scheme: MultiStepLR lr_steps: [50000, 100000, 200000, 300000] lr_gamma: 0.5 pixel_criterion: l1 pixel_weight: 0.01 feature_criterion: l1 feature_weight: 1 gan_type: vanilla gan_weight: 0.005 manual_seed: 0 niter: 500000.0 val_freq: 100 overwrite_val_imgs: None val_comparison: None ]
  logger:[ print_freq: 100 save_checkpoint_freq: 100.0 backup_freq: 100 overwrite_chkp: None ]
  is_train: True

20-12-30 03:59:48.658 - INFO: Random seed: 0
20-12-30 03:59:48.716 - INFO: Dataset [LRHRDataset - Dataset] is created.
20-12-30 03:59:48.716 - INFO: Number of train images: 1,307, iters: 14
20-12-30 03:59:48.716 - INFO: Total epochs needed: 35715 for iters 500,000
20-12-30 03:59:48.719 - INFO: Dataset [LRHRDataset - Validation] is created.
20-12-30 03:59:48.719 - INFO: Number of val images in [Validation]: 358
20-12-30 03:59:48.752 - INFO: AMP library available

Traceback (most recent call last):
  File "train.py", line 256, in <module>
    main()
  File "train.py", line 98, in main
    model = create_model(opt)
  File "/content/BasicSR/codes/models/__init__.py", line 26, in create_model
    m = M(opt)
  File "/content/BasicSR/codes/models/SRRaGAN_model.py", line 51, in __init__
    self.netG = networks.define_G(opt).to(self.device)  # G
  File "/content/BasicSR/codes/models/networks.py", line 160, in define_G
    finalact=opt_net['finalact'], gaussian_noise=opt_net['gaussian'], plus=opt_net['plus'], nr=opt_net['nr'])
  File "/content/BasicSR/codes/models/modules/architectures/RRDBNet_arch.py", line 26, in __init__
    gaussian_noise=gaussian_noise, plus=plus) for _ in range(nb)]
  File "/content/BasicSR/codes/models/modules/architectures/RRDBNet_arch.py", line 26, in <listcomp>
    gaussian_noise=gaussian_noise, plus=plus) for _ in range(nb)]
  File "/content/BasicSR/codes/models/modules/architectures/RRDBNet_arch.py", line 86, in __init__
    gaussian_noise=gaussian_noise, plus=plus) for _ in range(nr)]
TypeError: 'NoneType' object cannot be interpreted as an integer

victorca25 commented 3 years ago

Hello! Please add the variable nr: 3 to the configuration file, under the generator options: https://github.com/victorca25/BasicSR/blob/84be3f88c5612df24aa12a0ca8e1953381f15e6a/codes/options/train/train_template.yml#L108
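
For reference, a sketch of how the network_G block could look with the option added (values copied from the config dump above; only the nr line is new, with 3 being the template default). The traceback fails at range(nr) because the missing option is read back as None:

network_G:
  which_model_G: RRDB_net
  norm_type: null
  mode: CNA
  nf: 64
  nb: 23
  nr: 3  # new line; presumably the number of residual dense blocks inside each RRDB
  in_nc: 3
  out_nc: 3
  gc: 32
  group: 1
  convtype: Conv2D
  net_act: leakyrelu
  scale: 8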

Will fix this in a new commit in a while.

victorca25 commented 3 years ago

The fix has been committed, so you can also update to the latest version of the code. Let me know how it goes!

keywae commented 3 years ago

It gives the following error:

Traceback (most recent call last):
  File "train.py", line 256, in <module>
    main()
  File "train.py", line 98, in main
    model = create_model(opt)
  File "/content/BasicSR/codes/models/__init__.py", line 26, in create_model
    m = M(opt)
  File "/content/BasicSR/codes/models/SRRaGAN_model.py", line 125, in __init__
    self.cri_gan, self.netD, self.netG, train_opt, logger, self.optimizers)
  File "/content/BasicSR/codes/models/optimizers.py", line 27, in get_optimizers
    weight_decay=wd_G, betas=(train_opt['beta1_G'], 0.999))
  File "/usr/local/lib/python3.6/dist-packages/torch/optim/adam.py", line 40, in __init__
    if not 0.0 <= betas[0] < 1.0:
TypeError: '<=' not supported between instances of 'float' and 'NoneType'

Here is my Generator config:

network_G:
  which_model_G: 'RRDB_net'
  norm_type: null
  mode: 'CNA'
  nf: 64
  nb: 23
  in_nc: 3
  nr: 3
  out_nc: 3
  gc: 32
  group: 1
  convtype: 'Conv2D'
  net_act: 'leakyrelu'

victorca25 commented 3 years ago

Check that the optimizer and scheduler options in the train block are complete; they should look like this:

train:
  lr_G: 0.001
  weight_decay_G: 0
  beta1_G: 0.9
  lr_D: 0.001
  weight_decay_D: 0
  beta1_D: 0.9

  # For MultiStepLR (ESRGAN, default):
  lr_scheme: MultiStepLR
  lr_steps: [80000, 160000]
  lr_gamma: 0.5

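For context, a minimal PyTorch sketch (not traiNNer code) of why the missing key blows up: when beta1_G is absent from the YAML it is read back as None, and torch.optim.Adam rejects a None beta in exactly the comparison shown in the traceback.

import torch

train_opt = {}  # stand-in for a parsed config with no beta1_G entry
params = [torch.nn.Parameter(torch.zeros(1))]

# train_opt.get('beta1_G') is None here, so Adam's internal check
# "if not 0.0 <= betas[0] < 1.0" raises:
# TypeError: '<=' not supported between instances of 'float' and 'NoneType'
torch.optim.Adam(params, lr=1e-4, betas=(train_opt.get('beta1_G'), 0.999))

With beta1_G: 0.9 present in the config, the tuple becomes (0.9, 0.999) and the optimizer is constructed normally.
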
keywae commented 3 years ago

OK, that did it. Thanks.