Closed: kayabutterkun closed this issue 8 months ago.
Logs:

------------ Options -------------
batchSize: 1
data_type: 32
dataroot: /datasets/john/tryOn/Tests
display_winsize: 512
fineSize: 512
gen_checkpoint: /datasets/john/tryOn/Flow-Style-VTON-Checkpoints/PFAFN_gen_epoch_101.pth
gpu_ids: [0]
input_nc: 3
isTrain: False
loadSize: 512
max_dataset_size: inf
nThreads: 1
name: demo
no_flip: False
norm: instance
output: /work/output/2024-01-26-1434
output_nc: 3
phase: test
resize_or_crop: None
serial_batches: False
test_pair: /datasets/john/tryOn/Tests/test_pairs.txt
tf_log: False
use_dropout: False
verbose: False
warp_checkpoint: /datasets/john/tryOn/Flow-Style-VTON-Checkpoints/PFAFN_warp_epoch_101.pth
-------------- End ----------------
CustomDatasetDataLoader
dataset [AlignedDataset] was created
6
AFWM( (image_features): FeatureEncoder( (encoders): ModuleList( (0): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (1): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True,
track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (2): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (3): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (4): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) ) ) (cond_features): FeatureEncoder( (encoders): ModuleList( (0): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (1): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(128, 128, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (2): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (3): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) (4): Sequential( (0): DownSample( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) ) ) (1): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (2): ResBlock( (block): Sequential( (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): ReLU(inplace) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (4): ReLU(inplace) (5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) ) ) ) (image_FPN): RefinePyramid( (adaptive): ModuleList( (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) 
(3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1)) (4): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) ) (smooth): ModuleList( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (cond_FPN): RefinePyramid( (adaptive): ModuleList( (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1)) (4): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1)) ) (smooth): ModuleList( (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (aflow_net): AFlowNet( (netRefine): ModuleList( (0): Sequential( (0): Conv2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.1) (2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): LeakyReLU(negative_slope=0.1) (4): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): LeakyReLU(negative_slope=0.1) (6): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (1): Sequential( (0): Conv2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.1) (2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): LeakyReLU(negative_slope=0.1) (4): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): 
LeakyReLU(negative_slope=0.1) (6): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (2): Sequential( (0): Conv2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.1) (2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): LeakyReLU(negative_slope=0.1) (4): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): LeakyReLU(negative_slope=0.1) (6): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (3): Sequential( (0): Conv2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.1) (2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): LeakyReLU(negative_slope=0.1) (4): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): LeakyReLU(negative_slope=0.1) (6): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (4): Sequential( (0): Conv2d(512, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.1) (2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): LeakyReLU(negative_slope=0.1) (4): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): LeakyReLU(negative_slope=0.1) (6): Conv2d(32, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) ) (netStyle): ModuleList( (0): StyledConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=256, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn1): LeakyReLU(negative_slope=0.2, inplace) ) (1): StyledConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=256, bias=True) ) (padding): 
ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn1): LeakyReLU(negative_slope=0.2, inplace) ) (2): StyledConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=256, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn1): LeakyReLU(negative_slope=0.2, inplace) ) (3): StyledConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=256, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn1): LeakyReLU(negative_slope=0.2, inplace) ) (4): StyledConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=256, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn1): LeakyReLU(negative_slope=0.2, inplace) ) ) (netF): ModuleList( (0): Styled_F_ConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): 
LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=128, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) ) (1): Styled_F_ConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=128, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) ) (2): Styled_F_ConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=128, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) ) (3): Styled_F_ConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=128, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) ) (4): Styled_F_ConvBlock( (conv0): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=49, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (actvn0): LeakyReLU(negative_slope=0.2, inplace) (conv1): ModulatedConv2d( (mlp_class_std): EqualLinear( (linear): Linear(in_features=256, out_features=128, bias=True) ) (padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) ) ) (cond_style): Sequential( (0): Conv2d(256, 128, kernel_size=(8, 
6), stride=(1, 1)) (1): LeakyReLU(negative_slope=0.1) ) (image_style): Sequential( (0): Conv2d(256, 128, kernel_size=(8, 6), stride=(1, 1)) (1): LeakyReLU(negative_slope=0.1) ) ) )
###############################
/datasets/john/tryOn/Flow-Style-VTON-Checkpoints/PFAFN_warp_epoch_101.pth
/datasets/john/tryOn/Flow-Style-VTON-Checkpoints/PFAFN_warp_epoch_101.pth
No checkpoint!
/datasets/john/tryOn/Flow-Style-VTON-Checkpoints/PFAFN_gen_epoch_101.pth
No checkpoint!
/usr/local/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details. "See the documentation of nn.Upsample for details.".format(mode))
['016962_0.jpg'] tensor([[[[ 0.0123, -0.0173, -0.0079, ..., -0.0007, -0.0003, -0.0209],
[ 0.0056, -0.0065, -0.0189, ..., -0.0412, -0.0389, -0.0472],
[ 0.0180, -0.0155, -0.0281, ..., -0.0524, -0.0492, -0.0567],
...,
[ 0.0518, 0.0434, 0.0337, ..., 0.0086, 0.0104, -0.0182],
[ 0.0531, 0.0435, 0.0346, ..., 0.0108, 0.0124, -0.0167],
[ 0.0602, 0.0384, 0.0341, ..., 0.0229, 0.0233, 0.0078]],
[[-0.0131, -0.0698, -0.0633, ..., -0.0459, -0.0438, -0.0486],
[-0.0237, -0.0781, -0.0694, ..., -0.0725, -0.0693, -0.0836],
[-0.0076, -0.0751, -0.0741, ..., -0.0856, -0.0838, -0.1058],
...,
[-0.0147, -0.0902, -0.0975, ..., -0.1009, -0.1019, -0.1087],
[-0.0158, -0.0910, -0.0983, ..., -0.0993, -0.1004, -0.1077],
[-0.0067, -0.0219, -0.0362, ..., -0.0487, -0.0509, -0.0395]],
[[ 0.0171, 0.0278, 0.0342, ..., 0.0439, 0.0455, 0.0540],
[ 0.0395, 0.0132, 0.0324, ..., 0.0316, 0.0324, 0.0061],
[ 0.0399, 0.0409, 0.0494, ..., 0.0409, 0.0419, 0.0016],
...,
[ 0.0323, 0.0158, 0.0168, ..., 0.0182, 0.0189, -0.0224],
[ 0.0332, 0.0159, 0.0177, ..., 0.0212, 0.0221, -0.0211],
[ 0.0193, 0.0035, 0.0128, ..., 0.0075, 0.0080, -0.0325]]]],
grad_fn=<AddBackward0>)
['015794_0.jpg'] tensor([[[[ 0.0138, -0.0189, -0.0087, ..., -0.0007, 0.0001, -0.0188],
[ 0.0072, -0.0074, -0.0213, ..., -0.0388, -0.0366, -0.0436],
[ 0.0200, -0.0164, -0.0309, ..., -0.0511, -0.0481, -0.0533],
...,
[ 0.0583, 0.0483, 0.0376, ..., 0.0088, 0.0108, -0.0151],
[ 0.0597, 0.0485, 0.0386, ..., 0.0108, 0.0125, -0.0137],
[ 0.0678, 0.0427, 0.0381, ..., 0.0221, 0.0225, 0.0084]],
[[-0.0137, -0.0761, -0.0686, ..., -0.0445, -0.0423, -0.0455],
[-0.0259, -0.0841, -0.0741, ..., -0.0708, -0.0675, -0.0795],
[-0.0089, -0.0811, -0.0799, ..., -0.0839, -0.0820, -0.1008],
...,
[-0.0166, -0.1014, -0.1097, ..., -0.0946, -0.0955, -0.0997],
[-0.0178, -0.1023, -0.1106, ..., -0.0931, -0.0942, -0.0989],
[-0.0077, -0.0247, -0.0409, ..., -0.0468, -0.0487, -0.0371]],
[[ 0.0183, 0.0300, 0.0383, ..., 0.0402, 0.0417, 0.0500],
[ 0.0418, 0.0134, 0.0349, ..., 0.0286, 0.0297, 0.0059],
[ 0.0432, 0.0444, 0.0539, ..., 0.0369, 0.0380, 0.0013],
...,
[ 0.0363, 0.0175, 0.0187, ..., 0.0150, 0.0158, -0.0206],
[ 0.0372, 0.0175, 0.0197, ..., 0.0176, 0.0185, -0.0195],
[ 0.0216, 0.0036, 0.0143, ..., 0.0057, 0.0065, -0.0288]]]],
grad_fn=<AddBackward0>)
['014834_0.jpg'] tensor([[[[ 0.0143, -0.0199, -0.0092, ..., -0.0011, -0.0005, -0.0217],
[ 0.0074, -0.0084, -0.0230, ..., -0.0445, -0.0419, -0.0498],
[ 0.0200, -0.0173, -0.0327, ..., -0.0576, -0.0540, -0.0607],
...,
[ 0.0537, 0.0449, 0.0349, ..., 0.0082, 0.0103, -0.0197],
[ 0.0551, 0.0454, 0.0362, ..., 0.0108, 0.0127, -0.0178],
[ 0.0628, 0.0391, 0.0353, ..., 0.0232, 0.0238, 0.0079]],
[[-0.0141, -0.0793, -0.0714, ..., -0.0498, -0.0477, -0.0523],
[-0.0271, -0.0874, -0.0768, ..., -0.0790, -0.0755, -0.0901],
[-0.0106, -0.0851, -0.0838, ..., -0.0941, -0.0920, -0.1142],
...,
[-0.0166, -0.0966, -0.1052, ..., -0.1093, -0.1103, -0.1160],
[-0.0177, -0.0972, -0.1059, ..., -0.1075, -0.1086, -0.1149],
[-0.0082, -0.0247, -0.0400, ..., -0.0534, -0.0556, -0.0423]],
[[ 0.0188, 0.0310, 0.0403, ..., 0.0465, 0.0481, 0.0570],
[ 0.0426, 0.0134, 0.0358, ..., 0.0340, 0.0347, 0.0067],
[ 0.0444, 0.0457, 0.0557, ..., 0.0433, 0.0443, 0.0018],
...,
[ 0.0334, 0.0160, 0.0172, ..., 0.0184, 0.0190, -0.0246],
[ 0.0342, 0.0159, 0.0178, ..., 0.0213, 0.0222, -0.0233],
[ 0.0196, 0.0029, 0.0127, ..., 0.0069, 0.0075, -0.0349]]]],
grad_fn=<AddBackward0>)
['005510_0.jpg'] tensor([[[[ 0.0143, -0.0194, -0.0088, ..., -0.0004, 0.0001, -0.0230],
[ 0.0066, -0.0074, -0.0213, ..., -0.0447, -0.0423, -0.0513],
[ 0.0209, -0.0171, -0.0314, ..., -0.0580, -0.0544, -0.0616],
...,
[ 0.0599, 0.0501, 0.0401, ..., 0.0107, 0.0132, -0.0185],
[ 0.0614, 0.0502, 0.0412, ..., 0.0131, 0.0152, -0.0168],
[ 0.0703, 0.0444, 0.0405, ..., 0.0262, 0.0270, 0.0098]],
[[-0.0146, -0.0791, -0.0715, ..., -0.0514, -0.0491, -0.0536],
[-0.0267, -0.0885, -0.0782, ..., -0.0819, -0.0785, -0.0926],
[-0.0083, -0.0849, -0.0833, ..., -0.0957, -0.0937, -0.1169],
...,
[-0.0170, -0.1036, -0.1119, ..., -0.1128, -0.1140, -0.1196],
[-0.0181, -0.1044, -0.1128, ..., -0.1111, -0.1124, -0.1186],
[-0.0079, -0.0260, -0.0426, ..., -0.0559, -0.0581, -0.0444]],
[[ 0.0194, 0.0316, 0.0392, ..., 0.0481, 0.0499, 0.0595],
[ 0.0447, 0.0149, 0.0370, ..., 0.0343, 0.0352, 0.0061],
[ 0.0454, 0.0467, 0.0567, ..., 0.0436, 0.0447, 0.0009],
...,
[ 0.0363, 0.0177, 0.0190, ..., 0.0183, 0.0190, -0.0246],
[ 0.0372, 0.0177, 0.0199, ..., 0.0214, 0.0224, -0.0234],
[ 0.0214, 0.0036, 0.0148, ..., 0.0074, 0.0081, -0.0348]]]],
grad_fn=<AddBackward0>)
['004912_0.jpg'] tensor([[[[ 0.0142, -0.0192, -0.0088, ..., -0.0007, -0.0003, -0.0202],
[ 0.0075, -0.0076, -0.0219, ..., -0.0399, -0.0378, -0.0458],
[ 0.0204, -0.0163, -0.0313, ..., -0.0508, -0.0477, -0.0550],
...,
[ 0.0533, 0.0447, 0.0347, ..., 0.0072, 0.0088, -0.0161],
[ 0.0547, 0.0448, 0.0356, ..., 0.0092, 0.0105, -0.0148],
[ 0.0620, 0.0395, 0.0351, ..., 0.0203, 0.0205, 0.0070]],
[[-0.0138, -0.0774, -0.0697, ..., -0.0445, -0.0424, -0.0471],
[-0.0263, -0.0855, -0.0751, ..., -0.0702, -0.0672, -0.0811],
[-0.0092, -0.0825, -0.0811, ..., -0.0830, -0.0812, -0.1026],
...,
[-0.0152, -0.0930, -0.1005, ..., -0.0899, -0.0907, -0.0965],
[-0.0163, -0.0937, -0.1013, ..., -0.0885, -0.0894, -0.0956],
[-0.0070, -0.0226, -0.0373, ..., -0.0431, -0.0451, -0.0350]],
[[ 0.0185, 0.0305, 0.0393, ..., 0.0426, 0.0442, 0.0524],
[ 0.0423, 0.0135, 0.0356, ..., 0.0306, 0.0315, 0.0059],
[ 0.0440, 0.0452, 0.0551, ..., 0.0396, 0.0406, 0.0015],
...,
[ 0.0333, 0.0163, 0.0173, ..., 0.0162, 0.0170, -0.0198],
[ 0.0342, 0.0163, 0.0183, ..., 0.0188, 0.0198, -0.0187],
[ 0.0199, 0.0036, 0.0132, ..., 0.0064, 0.0069, -0.0288]]]],
grad_fn=<AddBackward0>)
['000066_0.jpg'] tensor([[[[ 0.0146, -0.0198, -0.0091, ..., -0.0012, -0.0006, -0.0234],
[ 0.0077, -0.0079, -0.0225, ..., -0.0468, -0.0441, -0.0529],
[ 0.0209, -0.0168, -0.0322, ..., -0.0595, -0.0558, -0.0635],
...,
[ 0.0544, 0.0455, 0.0354, ..., 0.0099, 0.0119, -0.0188],
[ 0.0557, 0.0456, 0.0363, ..., 0.0123, 0.0140, -0.0171],
[ 0.0632, 0.0403, 0.0358, ..., 0.0256, 0.0260, 0.0091]],
[[-0.0142, -0.0797, -0.0717, ..., -0.0515, -0.0493, -0.0546],
[-0.0271, -0.0879, -0.0772, ..., -0.0818, -0.0782, -0.0938],
[-0.0095, -0.0849, -0.0834, ..., -0.0967, -0.0945, -0.1186],
...,
[-0.0155, -0.0948, -0.1024, ..., -0.1093, -0.1103, -0.1171],
[-0.0166, -0.0956, -0.1032, ..., -0.1076, -0.1087, -0.1161],
[-0.0071, -0.0230, -0.0380, ..., -0.0529, -0.0552, -0.0429]],
[[ 0.0190, 0.0314, 0.0404, ..., 0.0496, 0.0512, 0.0605],
[ 0.0435, 0.0139, 0.0366, ..., 0.0358, 0.0366, 0.0067],
[ 0.0452, 0.0465, 0.0566, ..., 0.0459, 0.0470, 0.0017],
...,
[ 0.0340, 0.0166, 0.0176, ..., 0.0194, 0.0203, -0.0240],
[ 0.0349, 0.0166, 0.0186, ..., 0.0226, 0.0237, -0.0226],
[ 0.0202, 0.0036, 0.0135, ..., 0.0079, 0.0086, -0.0346]]]],
grad_fn=<AddBackward0>)
Hello! I am using the same package versions, but my results are very off. What am I missing?

Python: 3.6.13
PyTorch version: 1.1.0
Torchvision version: 0.3.0
CV2 version: 3.4.3