andreas128 / SRFlow

Official SRFlow training code: Super-Resolution using Normalizing Flow in PyTorch

RuntimeError: Error(s) in loading state_dict for SRFlowNet: #14

Closed flybiubiu closed 3 years ago

flybiubiu commented 3 years ago

Thanks again, author! I trained the x4 network and then tried to train the x8 network:

```
source myenv/bin/activate
cd code
python train.py -opt ./confs/SRFlow_DF2K_4X.yml
python train.py -opt ./confs/SRFlow_DF2K_8X.yml
```

The error occurs here:

```
(myenv) (base) ubuntu@ubuntu:~/Desktop/data1024/SRFlow-master/code$ python train.py -opt ./confs/SRFlow_DF2K_8X.yml
OrderedDict([('manual_seed', 10), ('lr_G', 0.0005), ('weight_decay_G', 0), ('beta1', 0.9), ('beta2', 0.99), ('lr_scheme', 'MultiStepLR'), ('warmup_iter', -1), ('lr_steps_rel', [0.5, 0.75, 0.9, 0.95]), ('lr_gamma', 0.5), ('niter', 200000), ('val_freq', 40000), ('lr_steps', [100000, 150000, 180000, 190000])])
Disabled distributed training.
pretrain_model path will be ignored when resuming training.
21-01-27 01:59:25.817 - INFO: name: train  use_tb_logger: True  model: SRFlow  distortion: sr  scale: 8  gpu_ids: [0]
  datasets:[
    train:[ name: CelebA_160_tr  mode: LRHR_PKL  dataroot_GT: ../datasets/DF2K-tr.pklv4  dataroot_LQ: ../datasets/DF2K-tr_X8.pklv4  quant: 32  use_shuffle: True  n_workers: 3  batch_size: 16  GT_size: 160  use_flip: True  color: RGB  phase: train  scale: 8  data_type: img ]
    val:[ name: CelebA_160_va  mode: LRHR_PKL  dataroot_GT: ../datasets/DIV2K-va.pklv4  dataroot_LQ: ../datasets/DIV2K-va_X8.pklv4  quant: 32  n_max: 20  phase: val  scale: 8  data_type: img ]
  ]
  dataroot_GT: ../datasets/div2k-validation-modcrop8-gt
  dataroot_LR: ../datasets/div2k-validation-modcrop8-x8
  model_path: ../pretrained_models/SRFlow_DF2K_8X.pth
  heat: 0.9
  network_G:[ which_model_G: SRFlowNet  in_nc: 3  out_nc: 3  nf: 64  nb: 23  upscale: 8  train_RRDB: False  train_RRDB_delay: 0.5
    flow:[ K: 16  L: 4  noInitialInj: True  coupling: CondAffineSeparatedAndCond  additionalFlowNoAffine: 2  split:[ enable: True ]  fea_up0: True  stackRRDB:[ blocks: [1, 3, 5, 7]  concat: True ] ]
    scale: 8 ]
  path:[ pretrain_model_G: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train/models/200000_G.pth  strict_load: True  resume_state: auto
    root: /home/ubuntu/Desktop/data1024/SRFlow-master
    experiments_root: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train
    models: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train/models
    training_state: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train/training_state
    log: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train
    val_images: /home/ubuntu/Desktop/data1024/SRFlow-master/experiments/train/val_images ]
  train:[ manual_seed: 10  lr_G: 0.0005  weight_decay_G: 0  beta1: 0.9  beta2: 0.99  lr_scheme: MultiStepLR  warmup_iter: -1  lr_steps_rel: [0.5, 0.75, 0.9, 0.95]  lr_gamma: 0.5  niter: 200000  val_freq: 40000  lr_steps: [100000, 150000, 180000, 190000] ]
  val:[ heats: [0.0, 0.5, 0.75, 1.0]  n_sample: 3 ]
  test:[ heats: [0.0, 0.7, 0.8, 0.9] ]
  logger:[ print_freq: 100  save_checkpoint_freq: 1000.0 ]
  is_train: True
  dist: False
```

```
21-01-27 01:59:25.867 - INFO: Random seed: 10
{'name': 'CelebA_160_tr', 'mode': 'LRHR_PKL', 'dataroot_GT': '../datasets/DF2K-tr.pklv4', 'dataroot_LQ': '../datasets/DF2K-tr_X8.pklv4', 'quant': 32, 'use_shuffle': True, 'n_workers': 3, 'batch_size': 16, 'GT_size': 160, 'use_flip': True, 'color': 'RGB', 'phase': 'train', 'scale': 8, 'data_type': 'img'}
Loaded 162150 HR images with [0.00, 255.00] in 15.97s from ../datasets/DF2K-tr.pklv4
Loaded 162150 LR images with [0.00, 255.00] in 15.97s from ../datasets/DF2K-tr_X8.pklv4
21-01-27 01:59:41.838 - INFO: Dataset [LRHR_PKLDataset - CelebA_160_tr] is created.
Dataset created
21-01-27 01:59:41.845 - INFO: Number of train images: 162,150, iters: 10,135
21-01-27 01:59:41.845 - INFO: Total epochs needed: 20 for iters 200,000
{'name': 'CelebA_160_va', 'mode': 'LRHR_PKL', 'dataroot_GT': '../datasets/DIV2K-va.pklv4', 'dataroot_LQ': '../datasets/DIV2K-va_X8.pklv4', 'quant': 32, 'n_max': 20, 'phase': 'val', 'scale': 8, 'data_type': 'img'}
Loaded 20 HR images with [0.00, 255.00] in 0.87s from ../datasets/DIV2K-va.pklv4
Loaded 20 LR images with [0.00, 255.00] in 0.87s from ../datasets/DIV2K-va_X8.pklv4
21-01-27 01:59:42.711 - INFO: Dataset [LRHR_PKLDataset - CelebA_160_va] is created.
```
```
21-01-27 01:59:42.711 - INFO: Number of val images in [CelebA_160_va]: 20
21-01-27 01:59:45.179 - INFO: Network G structure: DataParallel - SRFlowNet, with parameters: 50,821,891
21-01-27 01:59:45.180 - INFO: SRFlowNet(
  (RRDB): RRDBNet(
    (conv_first): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (RRDB_trunk): Sequential(
      (0): RRDB(
        (RDB1): ResidualDenseBlock_5C(
          (conv1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv2): Conv2d(96, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv3): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv4): Conv2d(160, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (conv5): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (lrelu): LeakyReLU(negative_slope=0.2, inplace=True)
        )
        (RDB2): ResidualDenseBlock_5C( ...same layers as RDB1... )
        (RDB3): ResidualDenseBlock_5C( ...same layers as RDB1... )
      )
      (1)-(22): RRDB( ...identical structure to (0)... )
... (log truncated here)
```
(conv3): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv4): Conv2d(160, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv5): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (lrelu): LeakyReLU(negative_slope=0.2, inplace=True) ) (RDB3): ResidualDenseBlock_5C( (conv1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(96, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv3): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv4): Conv2d(160, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv5): Conv2d(192, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (lrelu): LeakyReLU(negative_slope=0.2, inplace=True) ) ) ) (trunk_conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (upconv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (upconv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (upconv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (HRconv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv_last): Conv2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (lrelu): LeakyReLU(negative_slope=0.2, inplace=True) ) (flowUpsamplerNet): FlowUpsamplerNet( (layers): ModuleList( (0): SqueezeLayer() (1): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (2): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (3): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, 
kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (4): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (5): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (6): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): 
CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (7): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (8): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): 
Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (9): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (10): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (11): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): 
CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (12): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (13): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): 
Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (14): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (15): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (16): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): 
CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (17): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (18): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 326, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): 
Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (19): Split2d( (conv): Conv2dZeros(6, 12, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (20): SqueezeLayer() (21): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (22): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (23): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (24): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, 
kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (25): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (26): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (27): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False 
(actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (28): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (29): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, 
kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (30): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (31): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (32): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False 
(actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (33): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (34): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, 
kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (35): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (36): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (37): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False 
(actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (38): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 332, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (fFeatures): Sequential( (0): Conv2d( 320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() ) (3): ReLU() (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) ) ) (39): Split2d( (conv): Conv2dZeros(12, 24, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1]) ) (40): SqueezeLayer() (41): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (42): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() ) (43): FlowStep( (actnorm): ActNorm2d() (invconv): InvertibleConv1x1() (affine): CondAffineSeparatedAndCond( (fAffine): Sequential( (0): Conv2d( 344, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False (actnorm): ActNorm2d() ) (1): ReLU() (2): Conv2d( 64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False (actnorm): ActNorm2d() 
            )
            (3): ReLU()
            (4): Conv2dZeros(64, 48, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1])
          )
          (fFeatures): Sequential(
            (0): Conv2d(
              320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False
              (actnorm): ActNorm2d()
            )
            (1): ReLU()
            (2): Conv2d(
              64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False
              (actnorm): ActNorm2d()
            )
            (3): ReLU()
            (4): Conv2dZeros(64, 96, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1])
          )
        )
      )
      [... (44)-(58): FlowStep modules identical in structure to the one above, omitted for brevity ...]
      (59): SqueezeLayer()
      (60): FlowStep(
        (actnorm): ActNorm2d()
        (invconv): InvertibleConv1x1()
      )
      (61): FlowStep(
        (actnorm): ActNorm2d()
        (invconv): InvertibleConv1x1()
      )
      (62): FlowStep(
        (actnorm): ActNorm2d()
        (invconv): InvertibleConv1x1()
        (affine): CondAffineSeparatedAndCond(
          (fAffine): Sequential(
            (0): Conv2d(
              416, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False
              (actnorm): ActNorm2d()
            )
            (1): ReLU()
            (2): Conv2d(
              64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False
              (actnorm): ActNorm2d()
            )
            (3): ReLU()
            (4): Conv2dZeros(64, 192, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1])
          )
          (fFeatures): Sequential(
            (0): Conv2d(
              320, 64, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1], bias=False
              (actnorm): ActNorm2d()
            )
            (1): ReLU()
            (2): Conv2d(
              64, 64, kernel_size=[1, 1], stride=[1, 1], padding=[0, 0], bias=False
              (actnorm): ActNorm2d()
            )
            (3): ReLU()
            (4): Conv2dZeros(64, 384, kernel_size=[3, 3], stride=[1, 1], padding=[1, 1])
          )
        )
      )
      [... (63)-(77): FlowStep modules identical in structure to (62), omitted for brevity ...]
    )
    (f): Sequential(
      (0): Conv2d(320, 96, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
    )
  )
)

Traceback (most recent call last):
  File "train.py", line 324, in <module>
    main()
  File "train.py", line 158, in main
    model = create_model(opt, current_step)
  File "/home/ubuntu/Desktop/data1024/SRFlow-master/code/models/__init__.py", line 50, in create_model
    m = M(opt, step)
  File "/home/ubuntu/Desktop/data1024/SRFlow-master/code/models/SRFlow_model.py", line 58, in __init__
    self.load()
  File "/home/ubuntu/Desktop/data1024/SRFlow-master/code/models/SRFlow_model.py", line 267, in load
    self.load_network(get_resume_model_path, self.netG, strict=True, submodule=None)
  File "/home/ubuntu/Desktop/data1024/SRFlow-master/code/models/base_model.py", line 124, in load_network
    network.load_state_dict(load_net_clean, strict=strict)
  File "/home/ubuntu/Desktop/data1024/SRFlow-master/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SRFlowNet:
	Missing key(s) in state_dict: "RRDB.upconv3.weight", "RRDB.upconv3.bias", "flowUpsamplerNet.layers.39.conv.weight", "flowUpsamplerNet.layers.39.conv.bias", "flowUpsamplerNet.layers.39.conv.logs", "flowUpsamplerNet.layers.58.actnorm.bias", "flowUpsamplerNet.layers.58.actnorm.logs", "flowUpsamplerNet.layers.58.invconv.weight", "flowUpsamplerNet.layers.58.affine.fAffine.0.weight", "flowUpsamplerNet.layers.58.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.58.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.58.affine.fAffine.2.weight", "flowUpsamplerNet.layers.58.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.58.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.58.affine.fAffine.4.weight", "flowUpsamplerNet.layers.58.affine.fAffine.4.bias", "flowUpsamplerNet.layers.58.affine.fAffine.4.logs", "flowUpsamplerNet.layers.58.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.58.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.58.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.58.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.58.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.58.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.58.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.58.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.58.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.60.actnorm.bias", "flowUpsamplerNet.layers.60.actnorm.logs",
"flowUpsamplerNet.layers.60.invconv.weight", "flowUpsamplerNet.layers.61.actnorm.bias", "flowUpsamplerNet.layers.61.actnorm.logs", "flowUpsamplerNet.layers.61.invconv.weight", "flowUpsamplerNet.layers.62.actnorm.bias", "flowUpsamplerNet.layers.62.actnorm.logs", "flowUpsamplerNet.layers.62.invconv.weight", "flowUpsamplerNet.layers.62.affine.fAffine.0.weight", "flowUpsamplerNet.layers.62.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.62.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.62.affine.fAffine.2.weight", "flowUpsamplerNet.layers.62.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.62.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.62.affine.fAffine.4.weight", "flowUpsamplerNet.layers.62.affine.fAffine.4.bias", "flowUpsamplerNet.layers.62.affine.fAffine.4.logs", "flowUpsamplerNet.layers.62.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.62.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.62.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.62.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.62.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.62.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.62.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.62.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.62.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.63.actnorm.bias", "flowUpsamplerNet.layers.63.actnorm.logs", "flowUpsamplerNet.layers.63.invconv.weight", "flowUpsamplerNet.layers.63.affine.fAffine.0.weight", "flowUpsamplerNet.layers.63.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.63.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.63.affine.fAffine.2.weight", "flowUpsamplerNet.layers.63.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.63.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.63.affine.fAffine.4.weight", "flowUpsamplerNet.layers.63.affine.fAffine.4.bias", "flowUpsamplerNet.layers.63.affine.fAffine.4.logs", 
"flowUpsamplerNet.layers.63.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.63.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.63.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.63.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.63.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.63.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.63.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.63.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.63.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.64.actnorm.bias", "flowUpsamplerNet.layers.64.actnorm.logs", "flowUpsamplerNet.layers.64.invconv.weight", "flowUpsamplerNet.layers.64.affine.fAffine.0.weight", "flowUpsamplerNet.layers.64.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.64.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.64.affine.fAffine.2.weight", "flowUpsamplerNet.layers.64.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.64.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.64.affine.fAffine.4.weight", "flowUpsamplerNet.layers.64.affine.fAffine.4.bias", "flowUpsamplerNet.layers.64.affine.fAffine.4.logs", "flowUpsamplerNet.layers.64.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.64.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.64.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.64.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.64.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.64.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.64.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.64.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.64.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.65.actnorm.bias", "flowUpsamplerNet.layers.65.actnorm.logs", "flowUpsamplerNet.layers.65.invconv.weight", "flowUpsamplerNet.layers.65.affine.fAffine.0.weight", "flowUpsamplerNet.layers.65.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.65.affine.fAffine.0.actnorm.logs", 
"flowUpsamplerNet.layers.65.affine.fAffine.2.weight", "flowUpsamplerNet.layers.65.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.65.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.65.affine.fAffine.4.weight", "flowUpsamplerNet.layers.65.affine.fAffine.4.bias", "flowUpsamplerNet.layers.65.affine.fAffine.4.logs", "flowUpsamplerNet.layers.65.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.65.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.65.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.65.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.65.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.65.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.65.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.65.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.65.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.66.actnorm.bias", "flowUpsamplerNet.layers.66.actnorm.logs", "flowUpsamplerNet.layers.66.invconv.weight", "flowUpsamplerNet.layers.66.affine.fAffine.0.weight", "flowUpsamplerNet.layers.66.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.66.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.66.affine.fAffine.2.weight", "flowUpsamplerNet.layers.66.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.66.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.66.affine.fAffine.4.weight", "flowUpsamplerNet.layers.66.affine.fAffine.4.bias", "flowUpsamplerNet.layers.66.affine.fAffine.4.logs", "flowUpsamplerNet.layers.66.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.66.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.66.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.66.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.66.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.66.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.66.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.66.affine.fFeatures.4.bias", 
"flowUpsamplerNet.layers.66.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.67.actnorm.bias", "flowUpsamplerNet.layers.67.actnorm.logs", "flowUpsamplerNet.layers.67.invconv.weight", "flowUpsamplerNet.layers.67.affine.fAffine.0.weight", "flowUpsamplerNet.layers.67.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.67.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.67.affine.fAffine.2.weight", "flowUpsamplerNet.layers.67.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.67.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.67.affine.fAffine.4.weight", "flowUpsamplerNet.layers.67.affine.fAffine.4.bias", "flowUpsamplerNet.layers.67.affine.fAffine.4.logs", "flowUpsamplerNet.layers.67.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.67.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.67.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.67.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.67.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.67.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.67.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.67.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.67.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.68.actnorm.bias", "flowUpsamplerNet.layers.68.actnorm.logs", "flowUpsamplerNet.layers.68.invconv.weight", "flowUpsamplerNet.layers.68.affine.fAffine.0.weight", "flowUpsamplerNet.layers.68.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.68.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.68.affine.fAffine.2.weight", "flowUpsamplerNet.layers.68.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.68.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.68.affine.fAffine.4.weight", "flowUpsamplerNet.layers.68.affine.fAffine.4.bias", "flowUpsamplerNet.layers.68.affine.fAffine.4.logs", "flowUpsamplerNet.layers.68.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.68.affine.fFeatures.0.actnorm.bias", 
"flowUpsamplerNet.layers.68.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.68.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.68.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.68.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.68.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.68.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.68.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.69.actnorm.bias", "flowUpsamplerNet.layers.69.actnorm.logs", "flowUpsamplerNet.layers.69.invconv.weight", "flowUpsamplerNet.layers.69.affine.fAffine.0.weight", "flowUpsamplerNet.layers.69.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.69.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.69.affine.fAffine.2.weight", "flowUpsamplerNet.layers.69.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.69.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.69.affine.fAffine.4.weight", "flowUpsamplerNet.layers.69.affine.fAffine.4.bias", "flowUpsamplerNet.layers.69.affine.fAffine.4.logs", "flowUpsamplerNet.layers.69.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.69.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.69.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.69.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.69.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.69.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.69.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.69.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.69.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.70.actnorm.bias", "flowUpsamplerNet.layers.70.actnorm.logs", "flowUpsamplerNet.layers.70.invconv.weight", "flowUpsamplerNet.layers.70.affine.fAffine.0.weight", "flowUpsamplerNet.layers.70.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.70.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.70.affine.fAffine.2.weight", "flowUpsamplerNet.layers.70.affine.fAffine.2.actnorm.bias", 
"flowUpsamplerNet.layers.70.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.70.affine.fAffine.4.weight", "flowUpsamplerNet.layers.70.affine.fAffine.4.bias", "flowUpsamplerNet.layers.70.affine.fAffine.4.logs", "flowUpsamplerNet.layers.70.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.70.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.70.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.70.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.70.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.70.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.70.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.70.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.70.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.71.actnorm.bias", "flowUpsamplerNet.layers.71.actnorm.logs", "flowUpsamplerNet.layers.71.invconv.weight", "flowUpsamplerNet.layers.71.affine.fAffine.0.weight", "flowUpsamplerNet.layers.71.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.71.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.71.affine.fAffine.2.weight", "flowUpsamplerNet.layers.71.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.71.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.71.affine.fAffine.4.weight", "flowUpsamplerNet.layers.71.affine.fAffine.4.bias", "flowUpsamplerNet.layers.71.affine.fAffine.4.logs", "flowUpsamplerNet.layers.71.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.71.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.71.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.71.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.71.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.71.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.71.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.71.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.71.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.72.actnorm.bias", "flowUpsamplerNet.layers.72.actnorm.logs", 
"flowUpsamplerNet.layers.72.invconv.weight", "flowUpsamplerNet.layers.72.affine.fAffine.0.weight", "flowUpsamplerNet.layers.72.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.72.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.72.affine.fAffine.2.weight", "flowUpsamplerNet.layers.72.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.72.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.72.affine.fAffine.4.weight", "flowUpsamplerNet.layers.72.affine.fAffine.4.bias", "flowUpsamplerNet.layers.72.affine.fAffine.4.logs", "flowUpsamplerNet.layers.72.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.72.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.72.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.72.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.72.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.72.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.72.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.72.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.72.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.73.actnorm.bias", "flowUpsamplerNet.layers.73.actnorm.logs", "flowUpsamplerNet.layers.73.invconv.weight", "flowUpsamplerNet.layers.73.affine.fAffine.0.weight", "flowUpsamplerNet.layers.73.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.73.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.73.affine.fAffine.2.weight", "flowUpsamplerNet.layers.73.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.73.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.73.affine.fAffine.4.weight", "flowUpsamplerNet.layers.73.affine.fAffine.4.bias", "flowUpsamplerNet.layers.73.affine.fAffine.4.logs", "flowUpsamplerNet.layers.73.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.73.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.73.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.73.affine.fFeatures.2.weight", 
"flowUpsamplerNet.layers.73.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.73.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.73.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.73.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.73.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.74.actnorm.bias", "flowUpsamplerNet.layers.74.actnorm.logs", "flowUpsamplerNet.layers.74.invconv.weight", "flowUpsamplerNet.layers.74.affine.fAffine.0.weight", "flowUpsamplerNet.layers.74.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.74.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.74.affine.fAffine.2.weight", "flowUpsamplerNet.layers.74.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.74.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.74.affine.fAffine.4.weight", "flowUpsamplerNet.layers.74.affine.fAffine.4.bias", "flowUpsamplerNet.layers.74.affine.fAffine.4.logs", "flowUpsamplerNet.layers.74.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.74.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.74.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.74.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.74.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.74.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.74.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.74.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.74.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.75.actnorm.bias", "flowUpsamplerNet.layers.75.actnorm.logs", "flowUpsamplerNet.layers.75.invconv.weight", "flowUpsamplerNet.layers.75.affine.fAffine.0.weight", "flowUpsamplerNet.layers.75.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.75.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.75.affine.fAffine.2.weight", "flowUpsamplerNet.layers.75.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.75.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.75.affine.fAffine.4.weight", 
"flowUpsamplerNet.layers.75.affine.fAffine.4.bias", "flowUpsamplerNet.layers.75.affine.fAffine.4.logs", "flowUpsamplerNet.layers.75.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.75.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.75.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.75.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.75.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.75.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.75.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.75.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.75.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.76.actnorm.bias", "flowUpsamplerNet.layers.76.actnorm.logs", "flowUpsamplerNet.layers.76.invconv.weight", "flowUpsamplerNet.layers.76.affine.fAffine.0.weight", "flowUpsamplerNet.layers.76.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.76.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.76.affine.fAffine.2.weight", "flowUpsamplerNet.layers.76.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.76.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.76.affine.fAffine.4.weight", "flowUpsamplerNet.layers.76.affine.fAffine.4.bias", "flowUpsamplerNet.layers.76.affine.fAffine.4.logs", "flowUpsamplerNet.layers.76.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.76.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.76.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.76.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.76.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.76.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.76.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.76.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.76.affine.fFeatures.4.logs", "flowUpsamplerNet.layers.77.actnorm.bias", "flowUpsamplerNet.layers.77.actnorm.logs", "flowUpsamplerNet.layers.77.invconv.weight", "flowUpsamplerNet.layers.77.affine.fAffine.0.weight", 
"flowUpsamplerNet.layers.77.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.77.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.77.affine.fAffine.2.weight", "flowUpsamplerNet.layers.77.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.77.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.77.affine.fAffine.4.weight", "flowUpsamplerNet.layers.77.affine.fAffine.4.bias", "flowUpsamplerNet.layers.77.affine.fAffine.4.logs", "flowUpsamplerNet.layers.77.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.77.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.77.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.77.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.77.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.77.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.77.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.77.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.77.affine.fFeatures.4.logs".
Unexpected key(s) in state_dict: "flowUpsamplerNet.layers.40.actnorm.bias", "flowUpsamplerNet.layers.40.actnorm.logs", "flowUpsamplerNet.layers.40.invconv.weight", "flowUpsamplerNet.layers.42.affine.fAffine.0.weight", "flowUpsamplerNet.layers.42.affine.fAffine.0.actnorm.bias", "flowUpsamplerNet.layers.42.affine.fAffine.0.actnorm.logs", "flowUpsamplerNet.layers.42.affine.fAffine.2.weight", "flowUpsamplerNet.layers.42.affine.fAffine.2.actnorm.bias", "flowUpsamplerNet.layers.42.affine.fAffine.2.actnorm.logs", "flowUpsamplerNet.layers.42.affine.fAffine.4.weight", "flowUpsamplerNet.layers.42.affine.fAffine.4.bias", "flowUpsamplerNet.layers.42.affine.fAffine.4.logs", "flowUpsamplerNet.layers.42.affine.fFeatures.0.weight", "flowUpsamplerNet.layers.42.affine.fFeatures.0.actnorm.bias", "flowUpsamplerNet.layers.42.affine.fFeatures.0.actnorm.logs", "flowUpsamplerNet.layers.42.affine.fFeatures.2.weight", "flowUpsamplerNet.layers.42.affine.fFeatures.2.actnorm.bias", "flowUpsamplerNet.layers.42.affine.fFeatures.2.actnorm.logs", "flowUpsamplerNet.layers.42.affine.fFeatures.4.weight", "flowUpsamplerNet.layers.42.affine.fFeatures.4.bias", "flowUpsamplerNet.layers.42.affine.fFeatures.4.logs".
size mismatch for flowUpsamplerNet.layers.41.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.41.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.41.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.42.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.42.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.42.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.43.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.43.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.43.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.43.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.43.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.43.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.43.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.43.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.43.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.43.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.44.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.44.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.44.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.44.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.44.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.44.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.44.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.44.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.44.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.44.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.45.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.45.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.45.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.45.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.45.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.45.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.45.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.45.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.45.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.45.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.46.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.46.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.46.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.46.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.46.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.46.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.46.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.46.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.46.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.46.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.47.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.47.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.47.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.47.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.47.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.47.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.47.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.47.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.47.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.47.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.48.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.48.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.48.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.48.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.48.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.48.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.48.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.48.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.48.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.48.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.49.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.49.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.49.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.49.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.49.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.49.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.49.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.49.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.49.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.49.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.50.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.50.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.50.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.50.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.50.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.50.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.50.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.50.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.50.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.50.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.51.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.51.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.51.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.51.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.51.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.51.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.51.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.51.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.51.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.51.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.52.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.52.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.52.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.52.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.52.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.52.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.52.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.52.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.52.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.52.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.53.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.53.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.53.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.53.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.53.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.53.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.53.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.53.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.53.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.53.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.54.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.54.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.54.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.54.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]).
size mismatch for flowUpsamplerNet.layers.54.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.54.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]).
size mismatch for flowUpsamplerNet.layers.54.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.54.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]).
size mismatch for flowUpsamplerNet.layers.54.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for flowUpsamplerNet.layers.54.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).
size mismatch for flowUpsamplerNet.layers.55.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.55.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]).
size mismatch for flowUpsamplerNet.layers.55.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]).
size mismatch for flowUpsamplerNet.layers.55.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]). size mismatch for flowUpsamplerNet.layers.55.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.55.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]). size mismatch for flowUpsamplerNet.layers.55.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]). size mismatch for flowUpsamplerNet.layers.55.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.55.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]). size mismatch for flowUpsamplerNet.layers.55.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]). size mismatch for flowUpsamplerNet.layers.56.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]). size mismatch for flowUpsamplerNet.layers.56.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]). size mismatch for flowUpsamplerNet.layers.56.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]). 
size mismatch for flowUpsamplerNet.layers.56.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]). size mismatch for flowUpsamplerNet.layers.56.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.56.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]). size mismatch for flowUpsamplerNet.layers.56.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]). size mismatch for flowUpsamplerNet.layers.56.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.56.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]). size mismatch for flowUpsamplerNet.layers.56.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]). size mismatch for flowUpsamplerNet.layers.57.actnorm.bias: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]). size mismatch for flowUpsamplerNet.layers.57.actnorm.logs: copying a param with shape torch.Size([1, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 48, 1, 1]). size mismatch for flowUpsamplerNet.layers.57.invconv.weight: copying a param with shape torch.Size([96, 96]) from checkpoint, the shape in current model is torch.Size([48, 48]). 
size mismatch for flowUpsamplerNet.layers.57.affine.fAffine.0.weight: copying a param with shape torch.Size([64, 368, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 344, 3, 3]). size mismatch for flowUpsamplerNet.layers.57.affine.fAffine.4.weight: copying a param with shape torch.Size([96, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([48, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.57.affine.fAffine.4.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([48]). size mismatch for flowUpsamplerNet.layers.57.affine.fAffine.4.logs: copying a param with shape torch.Size([96, 1, 1]) from checkpoint, the shape in current model is torch.Size([48, 1, 1]). size mismatch for flowUpsamplerNet.layers.57.affine.fFeatures.4.weight: copying a param with shape torch.Size([192, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 64, 3, 3]). size mismatch for flowUpsamplerNet.layers.57.affine.fFeatures.4.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]). size mismatch for flowUpsamplerNet.layers.57.affine.fFeatures.4.logs: copying a param with shape torch.Size([192, 1, 1]) from checkpoint, the shape in current model is torch.Size([96, 1, 1]).**
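Errors like the above typically mean the checkpoint named in `pretrain_model_G` was produced by a model built from a different config (here, the 4X checkpoint being loaded into the 8X architecture, whose flow layers have different channel counts, e.g. 96 vs 48). Before calling `load_state_dict`, you can compare shapes directly to see exactly which keys disagree. The sketch below is a torch-free illustration (it works on plain name-to-shape dicts so the logic is easy to follow); `shape_mismatches` is a hypothetical helper, not part of SRFlow. With real checkpoints you would build the dicts from `torch.load(path)` and `model.state_dict()` as shown in the docstring.

```python
def shape_mismatches(ckpt_shapes, model_shapes):
    """Compare two {param_name: shape_tuple} dicts and report differences.

    With PyTorch you would build the inputs like:
        ckpt_shapes  = {k: tuple(v.shape) for k, v in torch.load(path).items()}
        model_shapes = {k: tuple(v.shape) for k, v in model.state_dict().items()}
    """
    return {
        # Keys the model lacks entirely, and keys the checkpoint lacks.
        "missing_in_model": sorted(set(ckpt_shapes) - set(model_shapes)),
        "missing_in_ckpt": sorted(set(model_shapes) - set(ckpt_shapes)),
        # Keys present in both but with incompatible shapes — these are the
        # "size mismatch" entries load_state_dict complains about.
        "size_mismatch": {
            k: (ckpt_shapes[k], model_shapes[k])
            for k in ckpt_shapes.keys() & model_shapes.keys()
            if ckpt_shapes[k] != model_shapes[k]
        },
    }


# Toy example mirroring the error above: a 96-channel checkpoint vs a 48-channel model.
ckpt = {"layers.52.invconv.weight": (96, 96), "rrdb.conv.weight": (64, 3, 3, 3)}
model = {"layers.52.invconv.weight": (48, 48), "rrdb.conv.weight": (64, 3, 3, 3)}
print(shape_mismatches(ckpt, model)["size_mismatch"])
# {'layers.52.invconv.weight': ((96, 96), (48, 48))}
```

Running this against the real 4X checkpoint and the 8X model would list exactly the `flowUpsamplerNet.layers.*` keys shown in the traceback, confirming the checkpoint and config do not belong together.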

andreas128 commented 3 years ago

Are you using the pretrained networks we provide?

flybiubiu commented 3 years ago

> Are you using the pretrained networks we provide?

[Two screenshots attached: "Screen Shot 2021-01-27 at 2 23 32 AM" and "Screen Shot 2021-01-27 at 2 24 00 AM"]

Yes! It is all the original code.

martin-danelljan commented 3 years ago

What PyTorch version do you have?

flybiubiu commented 3 years ago

> What PyTorch version do you have?

Thx! Here is my environment:

Python 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.7.1+cu110
>>> print(torch.version.cuda)
11.0
>>> print(torch.backends.cudnn.version())
8005

My GPU is an RTX 3090. I ran the setup code and found that the CUDA version did not match, so I reinstalled with:

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

I strictly followed the README instructions under "The following commands train the Super-Resolution network using Normalizing Flow in PyTorch":

source myenv/bin/activate                         # Use the env you created using setup.sh
cd code
python train.py -opt ./confs/SRFlow_DF2K_4X.yml   # Diverse Images 4X (Dataset Included)
python train.py -opt ./confs/SRFlow_DF2K_8X.yml   # Diverse Images 8X (Dataset Included)
python train.py -opt ./confs/SRFlow_CelebA_8X.yml # Faces 8X

And the code is the original code.

flybiubiu commented 3 years ago

Thx author! I ran 4X and then 8X, and now it's ok! Thx again.
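For readers hitting the same error: the key point is that each scale needs its own config and its own checkpoints. The 8X config's `path.pretrain_model_G` must not point at a checkpoint from the 4X run (such as `.../train/models/200000_G.pth` above), because the 4X and 8X flow networks have different layer shapes. A hedged sketch of what the `path:` section of `SRFlow_DF2K_8X.yml` might look like follows; the `RRDB_DF2K_8X.pth` filename and the `load_submodule` key are taken from the repo's shipped configs to the best of my knowledge, so verify them against your copy:

```yaml
path:
  # Warm-start only the RRDB encoder for 8X, not a full 4X SRFlowNet
  # checkpoint, whose flow layers have incompatible shapes (96 vs 48 channels).
  pretrain_model_G: ../pretrained_models/RRDB_DF2K_8X.pth  # assumed filename
  load_submodule: RRDB
  strict_load: true
  resume_state: auto
```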

KenBCHu commented 2 years ago

@flybiubiu Could you explain how you fixed this issue? I ran into the same problem, thanks a lot.