pytorch / TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
https://pytorch.org/TensorRT
BSD 3-Clause "New" or "Revised" License

🐛 [Bug] Encountered bug IValue assuming it was N2at6TensorE however type is NoneType when using TRTorch #594

Closed: p1x31 closed this issue 2 years ago

p1x31 commented 3 years ago

Bug Description

I'm trying to compile but getting this error:


Traceback (most recent call last):
File "test_face.py", line 36, in <module>
generated = model(data_i, mode="inference")
File "anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/Face_Enhancement/models/pix2pix_model.py", line 50, in forward
fake_image, _ = self.generate_fake(input_semantics, degraded_image, real_image)
File "/Face_Enhancement/models/pix2pix_model.py", line 215, in generate_fake
trt_ts_module = trtorch.compile(model, compile_settings)
File "anaconda3/lib/python3.8/site-packages/trtorch/_compiler.py", line 81, in compile
compiled_cpp_mod = trtorch._C.compile_graph(module._c, _parse_compile_spec(compile_spec))
RuntimeError: [Error thrown at ./core/conversion/var/Var_inl.h:37] Expected ivalue->isTensor() to be true but got false
Requested unwrapping of arg IValue assuming it was N2at6TensorE however type is NoneType

To Reproduce

Steps to reproduce the behavior. The traced graph:

graph(%self.1 : __torch__.models.networks.generator.SPADEGenerator,
      %input.1 : Tensor,
      %input0.1 : Tensor):
  %27 : NoneType = prim::Constant() # :0:0
  %26 : bool = prim::Constant[value=0]() # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %24 : int = prim::Constant[value=16]() # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %78 : float = prim::Constant[value=0.20000000000000001]() # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %4 : __torch__.torch.nn.modules.conv.___torch_mangle_146.Conv2d = prim::GetAttr[name="conv_img"](%self.1)
  %6 : __torch__.models.networks.architecture.___torch_mangle_145.SPADEResnetBlock = prim::GetAttr[name="up_3"](%self.1)
  %8 : __torch__.models.networks.architecture.___torch_mangle_120.SPADEResnetBlock = prim::GetAttr[name="up_2"](%self.1)
  %10 : __torch__.models.networks.architecture.___torch_mangle_95.SPADEResnetBlock = prim::GetAttr[name="up_1"](%self.1)
  %12 : __torch__.models.networks.architecture.___torch_mangle_70.SPADEResnetBlock = prim::GetAttr[name="up_0"](%self.1)
  %14 : __torch__.models.networks.architecture.___torch_mangle_45.SPADEResnetBlock = prim::GetAttr[name="G_middle_1"](%self.1)
  %16 : __torch__.models.networks.architecture.___torch_mangle_28.SPADEResnetBlock = prim::GetAttr[name="G_middle_0"](%self.1)
  %18 : __torch__.torch.nn.modules.upsampling.Upsample = prim::GetAttr[name="up"](%self.1)
  %20 : __torch__.models.networks.architecture.SPADEResnetBlock = prim::GetAttr[name="head_0"](%self.1)
  %22 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="fc"](%self.1)
  %25 : int[] = prim::ListConstruct(%24, %24)
  %input1.1 : Tensor = aten::upsample_bilinear2d(%input0.1, %25, %26, %27, %27) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %34 : Tensor = prim::CallMethod[name="forward"](%22, %input1.1) # :0:0
  %37 : Tensor = prim::CallMethod[name="forward"](%20, %34, %input.1, %input0.1) # :0:0
  %41 : Tensor = prim::CallMethod[name="forward"](%18, %37) # :0:0
  %44 : Tensor = prim::CallMethod[name="forward"](%16, %41, %input.1, %input0.1) # :0:0
  %50 : Tensor = prim::CallMethod[name="forward"](%14, %44, %input.1, %input0.1) # :0:0
  %51 : Tensor = prim::CallMethod[name="forward1"](%18, %50) # :0:0
  %57 : Tensor = prim::CallMethod[name="forward"](%12, %51, %input.1, %input0.1) # :0:0
  %58 : Tensor = prim::CallMethod[name="forward2"](%18, %57) # :0:0
  %64 : Tensor = prim::CallMethod[name="forward"](%10, %58, %input.1, %input0.1) # :0:0
  %65 : Tensor = prim::CallMethod[name="forward3"](%18, %64) # :0:0
  %71 : Tensor = prim::CallMethod[name="forward"](%8, %65, %input.1, %input0.1) # :0:0
  %72 : Tensor = prim::CallMethod[name="forward4"](%18, %71) # :0:0
  %77 : Tensor = prim::CallMethod[name="forward"](%6, %72, %input.1, %input0.1) # :0:0
  %input2.1 : Tensor = aten::leaky_relu(%77, %78) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %82 : Tensor = prim::CallMethod[name="forward"](%4, %input2.1) # :0:0
  %83 : Tensor = aten::tanh(%82) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1699:0
  return (%83)
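
Note the two trailing NoneType arguments (%27) that the graph passes to aten::upsample_bilinear2d. For reference, a minimal module that traces to the same call pattern (a hypothetical standalone repro, not the project's code) looks like this:

import torch
import torch.nn.functional as F

class UpsampleRepro(torch.nn.Module):
    # Hypothetical stand-in for the F.interpolate call inside the generator.
    def forward(self, x):
        # Traces to aten::upsample_bilinear2d(x, [16, 16], False, None, None);
        # the two trailing None values correspond to %27 in the graph above.
        return F.interpolate(x, size=(16, 16), mode="bilinear", align_corners=False)

traced = torch.jit.trace(UpsampleRepro(), torch.rand(1, 3, 64, 64))
print(traced.graph)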

Compile settings:


model = model.to('cuda')
compile_settings = {
    #"input_shapes": [img_L.shape],
    "inputs": [
        trtorch.Input(
            min_shape=[1, 18, 64, 64],
            opt_shape=[1, 18, 512, 512],
            max_shape=[1, 18, 512, 512],
            # For static size shape=[1, 3, 224, 224]
            dtype=torch.half,
        ),
        trtorch.Input(
            min_shape=[1, 3, 64, 64],
            opt_shape=[1, 3, 512, 512],
            max_shape=[1, 3, 512, 512],
            # For static size shape=[1, 3, 224, 224]
            dtype=torch.half,
        ),
    ],
    "enabled_precisions": {torch.half},  # Run with FP16
    "workspace_size": 1 << 20,
    "truncate_long_and_double": True,
}
trt_ts_module = trtorch.compile(model, compile_settings)
torch.jit.save(trt_ts_module, "trt_8_cuda_512.ts")
model = trt_ts_module
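
For completeness, the saved module would later be reloaded with the standard TorchScript API (a usage sketch, not part of the original report):

import torch

# Load the serialized TorchScript module with the embedded TensorRT engine.
trt_ts_module = torch.jit.load("trt_8_cuda_512.ts")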

Expected behaviour

The model should compile, just as it does under plain TorchScript.

Environment

Build information about the TRTorch compiler can be found by turning on debug messages

Additional context

Does it expect one more argument of NoneType? If so, how can I specify a NoneType input in the settings and compile with it?

narendasan commented 3 years ago

Can you turn on Debug logging and share the log (or at least the parts around the error)?

Does it expect one more argument of NoneType? If so how can I specify NoneType in settings and compile with it?

No, this error occurs when an argument to a converter is expected to be an at::Tensor but None was provided instead. There shouldn't be anything you need to do; it's a bug in whichever converter is handling this incorrectly. The debug log will tell us which one is the issue.
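
(For reference, a minimal sketch of raising the log level with the trtorch 0.4-era Python API, assuming the trtorch.logging module; later torch_tensorrt releases rename it:)

import trtorch

# Raise the reportable log level so converter-level DEBUG messages are
# printed while trtorch.compile runs.
trtorch.logging.set_reportable_log_level(trtorch.logging.Level.Debug)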

p1x31 commented 3 years ago

Here is the debug log. Just to provide you with a little more information: the net is a variational autoencoder. It computes the sampling operation of z by reparameterization of mu and logvar to make the network differentiable during training. However, during inference, z is assigned None. Stock PyTorch inference call:

def generate_fake(self, input_semantics, degraded_image, real_image, compute_kld_loss=False):
    z = None  # z is only sampled during training; at inference it stays None
    fake_image = self.netG(input_semantics, degraded_image, z=z)

It was traced with only two arguments: input = torch.rand(1, 18, 512, 512).to("cuda"), torch.rand(1, 3, 512, 512).to("cuda")
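
A sketch of what that tracing step presumably looked like (a reconstruction under the names from the snippet above, not the exact script):

import torch

# netG: the SPADEGenerator instance from the snippet above (assumed in scope).
example_inputs = (
    torch.rand(1, 18, 512, 512).to("cuda"),  # input_semantics
    torch.rand(1, 3, 512, 512).to("cuda"),   # degraded_image
)
# z is omitted entirely, so the traced graph has no NoneType input.
model = torch.jit.trace(netG, example_inputs)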

TorchScript inference call: fake_image = self.netG(input_semantics, degraded_image). The following returns True:


print(trtorch.check_method_op_support(model, 'forward')) 

Maybe this is the issue:

DEBUG: [TRTorch Conversion Context] - Evaluating %680 : NoneType = prim::Constant()

Debug log:

TRTorch Version: 0.4.0a0
Using TensorRT Version: 8.0.1.6
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

DEBUG: [TRTorch] - Settings requested for Lowering:
    Forced Fallback Modules: [
    ]
DEBUG: [TRTorch] - After marking operations for torch fallback:
graph(%input.2 : Tensor, %input0.2 : Tensor): %678 : int[] = prim::Constant[value=[16, 16]]() %679 : bool = prim::Constant[value=0]() %680 : NoneType = prim::Constant() %self.fc.weight : Float(1024, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.fc.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %683 : int[] = prim::Constant[value=[1, 1]]() %684 : int[] = prim::Constant[value=[0, 0]]() %685 : int = prim::Constant[value=1]() %686 : bool = prim::Constant[value=1]() %self.head_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %689 : float = prim::Constant[value=0.10000000000000001]() %690 : float = prim::Constant[value=1.0000000000000001e-05]() %691 : int = prim::Constant[value=2]() %692 : int = prim::Constant[value=3]() %self.head_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3,
1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %699 : Tensor = prim::Constant[value={1}]() %700 : float = prim::Constant[value=0.20000000000000001]() %701 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %711 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %713 : float[] = prim::Constant[value=[2., 2.]]() %self.G_middle_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %722 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, 
device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %732 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %742 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() 
%self.G_middle_1.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %752 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.G_middle_1.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_s.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %762 : Float(512, 1024, 1, 1, strides=[1024, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %769 : Float(512, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.conv_0.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.norm_1.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = 
prim::Constant[value=]() %779 : Float(512, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_0.conv_1.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_s.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %789 : Float(256, 512, 1, 1, strides=[512, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_0.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %796 : Float(256, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.conv_0.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_1.norm_1.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %806 : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = 
prim::Constant[value=]() %self.up_1.conv_1.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_s.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %816 : Float(128, 256, 1, 1, strides=[256, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_0.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %823 : Float(128, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.conv_0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.norm_1.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %833 : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_2.conv_1.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = 
prim::Constant[value=]() %self.up_3.norm_s.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_s.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %843 : Float(64, 128, 1, 1, strides=[128, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_0.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %850 : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.conv_0.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.param_free_norm.running_mean : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.param_free_norm.running_var : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_gamma.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_gamma.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_beta.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.norm_1.mlp_beta.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %860 : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.up_3.conv_1.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.conv_img.weight : Float(3, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = 
prim::Constant[value=]() %self.conv_img.bias : Float(3, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value= 0.1178 -0.0640 -0.1034 [ CUDAFloatType{3} ]]() %input1.3 : Tensor = aten::upsample_bilinear2d(%input0.2, %678, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.5 : Tensor = aten::_convolution(%input1.3, %self.fc.weight, %self.fc.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.3 : Tensor = aten::batch_norm(%input0.5, %680, %680, %self.head_0.norm_0.param_free_norm.running_mean, %self.head_0.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %867 : int = aten::size(%input0.5, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %868 : int = aten::size(%input0.5, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %869 : int[] = prim::ListConstruct(%867, %868) %input1.5 : Tensor = aten::upsample_bilinear2d(%input0.2, %869, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.7 : Tensor = aten::_convolution(%input1.5, %self.head_0.norm_0.mlp_shared.0.weight, %self.head_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %872 : Tensor = aten::relu(%input0.7) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.3 : Tensor = aten::_convolution(%872, %self.head_0.norm_0.mlp_gamma.weight, %self.head_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.3 : Tensor = aten::_convolution(%872, %self.head_0.norm_0.mlp_beta.weight, %self.head_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %875 : Tensor = aten::add(%gamma.3, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %876 : Tensor = aten::mul(%normalized.3, %875) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.7 : Tensor = aten::add(%876, %beta.3, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.7 : Tensor = aten::leaky_relu(%input2.7, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.9 : Tensor = aten::_convolution(%input1.7, %701, %self.head_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.5 : Tensor = aten::batch_norm(%input0.9, %680, %680, %self.head_0.norm_1.param_free_norm.running_mean, %self.head_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %881 : int = aten::size(%input0.9, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %882 : int = aten::size(%input0.9, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %883 : int[] = prim::ListConstruct(%881, %882) %input1.9 : Tensor = aten::upsample_bilinear2d(%input0.2, %883, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.11 : Tensor = aten::_convolution(%input1.9, %self.head_0.norm_1.mlp_shared.0.weight, %self.head_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, 
%686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %886 : Tensor = aten::relu(%input0.11) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.5 : Tensor = aten::_convolution(%886, %self.head_0.norm_1.mlp_gamma.weight, %self.head_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.5 : Tensor = aten::_convolution(%886, %self.head_0.norm_1.mlp_beta.weight, %self.head_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %889 : Tensor = aten::add(%gamma.5, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %890 : Tensor = aten::mul(%normalized.5, %889) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.9 : Tensor = aten::add(%890, %beta.5, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.11 : Tensor = aten::leaky_relu(%input2.9, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.2 : Tensor = aten::_convolution(%input2.11, %711, %self.head_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.2 : Tensor = aten::add(%input0.5, %dx.2, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %input.4 : Tensor = aten::upsample_nearest2d(%input3.2, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0 %normalized.7 : Tensor = aten::batch_norm(%input.4, %680, %680, %self.G_middle_0.norm_0.param_free_norm.running_mean, %self.G_middle_0.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %897 : int = aten::size(%input.4, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %898 : int = aten::size(%input.4, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %899 : int[] = prim::ListConstruct(%897, %898) %input1.11 : Tensor = aten::upsample_bilinear2d(%input0.2, %899, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.13 : Tensor = aten::_convolution(%input1.11, %self.G_middle_0.norm_0.mlp_shared.0.weight, %self.G_middle_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %902 : Tensor = aten::relu(%input0.13) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.7 : Tensor = aten::_convolution(%902, %self.G_middle_0.norm_0.mlp_gamma.weight, %self.G_middle_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.7 : Tensor = aten::_convolution(%902, %self.G_middle_0.norm_0.mlp_beta.weight, %self.G_middle_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %905 : Tensor = aten::add(%gamma.7, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %906 : Tensor = aten::mul(%normalized.7, %905) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.13 : Tensor = aten::add(%906, %beta.7, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.13 : Tensor = aten::leaky_relu(%input2.13, %700) # 
anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.15 : Tensor = aten::_convolution(%input1.13, %722, %self.G_middle_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.9 : Tensor = aten::batch_norm(%input0.15, %680, %680, %self.G_middle_0.norm_1.param_free_norm.running_mean, %self.G_middle_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %911 : int = aten::size(%input0.15, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %912 : int = aten::size(%input0.15, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %913 : int[] = prim::ListConstruct(%911, %912) %input1.15 : Tensor = aten::upsample_bilinear2d(%input0.2, %913, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.17 : Tensor = aten::_convolution(%input1.15, %self.G_middle_0.norm_1.mlp_shared.0.weight, %self.G_middle_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %916 : Tensor = aten::relu(%input0.17) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.9 : Tensor = aten::_convolution(%916, %self.G_middle_0.norm_1.mlp_gamma.weight, %self.G_middle_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.9 : Tensor = aten::_convolution(%916, %self.G_middle_0.norm_1.mlp_beta.weight, %self.G_middle_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %919 : Tensor = aten::add(%gamma.9, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %920 : Tensor = aten::mul(%normalized.9, %919) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.15 : Tensor = aten::add(%920, %beta.9, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.17 : Tensor = aten::leaky_relu(%input2.15, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.4 : Tensor = aten::_convolution(%input2.17, %732, %self.G_middle_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.4 : Tensor = aten::add(%input.4, %dx.4, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %normalized.11 : Tensor = aten::batch_norm(%input3.4, %680, %680, %self.G_middle_1.norm_0.param_free_norm.running_mean, %self.G_middle_1.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %926 : int = aten::size(%input3.4, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %927 : int = aten::size(%input3.4, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %928 : int[] = prim::ListConstruct(%926, %927) %input1.17 : Tensor = aten::upsample_bilinear2d(%input0.2, %928, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.19 : Tensor = aten::_convolution(%input1.17, %self.G_middle_1.norm_0.mlp_shared.0.weight, %self.G_middle_1.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 
%931 : Tensor = aten::relu(%input0.19) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.11 : Tensor = aten::_convolution(%931, %self.G_middle_1.norm_0.mlp_gamma.weight, %self.G_middle_1.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.11 : Tensor = aten::_convolution(%931, %self.G_middle_1.norm_0.mlp_beta.weight, %self.G_middle_1.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %934 : Tensor = aten::add(%gamma.11, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %935 : Tensor = aten::mul(%normalized.11, %934) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.19 : Tensor = aten::add(%935, %beta.11, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.19 : Tensor = aten::leaky_relu(%input2.19, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.21 : Tensor = aten::_convolution(%input1.19, %742, %self.G_middle_1.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.13 : Tensor = aten::batch_norm(%input0.21, %680, %680, %self.G_middle_1.norm_1.param_free_norm.running_mean, %self.G_middle_1.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %940 : int = aten::size(%input0.21, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %941 : int = aten::size(%input0.21, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %942 : int[] = prim::ListConstruct(%940, %941) %input1.21 : Tensor = aten::upsample_bilinear2d(%input0.2, %942, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.23 : Tensor = aten::_convolution(%input1.21, %self.G_middle_1.norm_1.mlp_shared.0.weight, %self.G_middle_1.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %945 : Tensor = aten::relu(%input0.23) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.13 : Tensor = aten::_convolution(%945, %self.G_middle_1.norm_1.mlp_gamma.weight, %self.G_middle_1.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.13 : Tensor = aten::_convolution(%945, %self.G_middle_1.norm_1.mlp_beta.weight, %self.G_middle_1.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %948 : Tensor = aten::add(%gamma.13, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %949 : Tensor = aten::mul(%normalized.13, %948) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.21 : Tensor = aten::add(%949, %beta.13, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.23 : Tensor = aten::leaky_relu(%input2.21, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.6 : Tensor = aten::_convolution(%input2.23, %752, %self.G_middle_1.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.6 : Tensor = 
aten::add(%input3.4, %dx.6, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %input.6 : Tensor = aten::upsample_nearest2d(%input3.6, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0 %normalized.15 : Tensor = aten::batch_norm(%input.6, %680, %680, %self.up_0.norm_s.param_free_norm.running_mean, %self.up_0.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %956 : int = aten::size(%input.6, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %957 : int = aten::size(%input.6, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %958 : int[] = prim::ListConstruct(%956, %957) %input1.23 : Tensor = aten::upsample_bilinear2d(%input0.2, %958, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.25 : Tensor = aten::_convolution(%input1.23, %self.up_0.norm_s.mlp_shared.0.weight, %self.up_0.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %961 : Tensor = aten::relu(%input0.25) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.15 : Tensor = aten::_convolution(%961, %self.up_0.norm_s.mlp_gamma.weight, %self.up_0.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.15 : Tensor = aten::_convolution(%961, %self.up_0.norm_s.mlp_beta.weight, %self.up_0.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %964 : Tensor = aten::add(%gamma.15, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %965 : Tensor = aten::mul(%normalized.15, %964) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.25 : Tensor = aten::add(%965, %beta.15, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %x_s.2 : Tensor = aten::_convolution(%input2.25, %762, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input0.27 : Tensor = aten::_convolution(%input1.23, %self.up_0.norm_0.mlp_shared.0.weight, %self.up_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %969 : Tensor = aten::relu(%input0.27) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.17 : Tensor = aten::_convolution(%969, %self.up_0.norm_0.mlp_gamma.weight, %self.up_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.17 : Tensor = aten::_convolution(%969, %self.up_0.norm_0.mlp_beta.weight, %self.up_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %972 : Tensor = aten::add(%gamma.17, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %973 : Tensor = aten::mul(%normalized.15, %972) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.27 : Tensor = aten::add(%973, %beta.17, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.27 : Tensor = aten::leaky_relu(%input2.27, %700) # 
anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.29 : Tensor = aten::_convolution(%input1.27, %769, %self.up_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.19 : Tensor = aten::batch_norm(%input0.29, %680, %680, %self.up_0.norm_1.param_free_norm.running_mean, %self.up_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %978 : int = aten::size(%input0.29, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %979 : int = aten::size(%input0.29, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %980 : int[] = prim::ListConstruct(%978, %979) %input1.29 : Tensor = aten::upsample_bilinear2d(%input0.2, %980, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.31 : Tensor = aten::_convolution(%input1.29, %self.up_0.norm_1.mlp_shared.0.weight, %self.up_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %983 : Tensor = aten::relu(%input0.31) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.19 : Tensor = aten::_convolution(%983, %self.up_0.norm_1.mlp_gamma.weight, %self.up_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.19 : Tensor = aten::_convolution(%983, %self.up_0.norm_1.mlp_beta.weight, %self.up_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %986 : Tensor = aten::add(%gamma.19, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %987 : Tensor = aten::mul(%normalized.19, %986) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.29 : Tensor = aten::add(%987, %beta.19, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.31 : Tensor = aten::leaky_relu(%input2.29, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.8 : Tensor = aten::_convolution(%input2.31, %779, %self.up_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.8 : Tensor = aten::add(%x_s.2, %dx.8, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %input.8 : Tensor = aten::upsample_nearest2d(%input3.8, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0 %normalized.21 : Tensor = aten::batch_norm(%input.8, %680, %680, %self.up_1.norm_s.param_free_norm.running_mean, %self.up_1.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %994 : int = aten::size(%input.8, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %995 : int = aten::size(%input.8, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %996 : int[] = prim::ListConstruct(%994, %995) %input1.31 : Tensor = aten::upsample_bilinear2d(%input0.2, %996, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.33 : Tensor = aten::_convolution(%input1.31, %self.up_1.norm_s.mlp_shared.0.weight, %self.up_1.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # 
anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %999 : Tensor = aten::relu(%input0.33) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.21 : Tensor = aten::_convolution(%999, %self.up_1.norm_s.mlp_gamma.weight, %self.up_1.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.21 : Tensor = aten::_convolution(%999, %self.up_1.norm_s.mlp_beta.weight, %self.up_1.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1002 : Tensor = aten::add(%gamma.21, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1003 : Tensor = aten::mul(%normalized.21, %1002) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.33 : Tensor = aten::add(%1003, %beta.21, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %x_s.4 : Tensor = aten::_convolution(%input2.33, %789, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input0.35 : Tensor = aten::_convolution(%input1.31, %self.up_1.norm_0.mlp_shared.0.weight, %self.up_1.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1007 : Tensor = aten::relu(%input0.35) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.23 : Tensor = aten::_convolution(%1007, %self.up_1.norm_0.mlp_gamma.weight, %self.up_1.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.23 : Tensor = aten::_convolution(%1007, %self.up_1.norm_0.mlp_beta.weight, %self.up_1.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1010 : Tensor = aten::add(%gamma.23, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1011 : Tensor = aten::mul(%normalized.21, %1010) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.35 : Tensor = aten::add(%1011, %beta.23, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.35 : Tensor = aten::leaky_relu(%input2.35, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.37 : Tensor = aten::_convolution(%input1.35, %796, %self.up_1.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.25 : Tensor = aten::batch_norm(%input0.37, %680, %680, %self.up_1.norm_1.param_free_norm.running_mean, %self.up_1.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %1016 : int = aten::size(%input0.37, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %1017 : int = aten::size(%input0.37, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %1018 : int[] = prim::ListConstruct(%1016, %1017) %input1.37 : Tensor = aten::upsample_bilinear2d(%input0.2, %1018, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.39 : Tensor = aten::_convolution(%input1.37, %self.up_1.norm_1.mlp_shared.0.weight, %self.up_1.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, 
%686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1021 : Tensor = aten::relu(%input0.39) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.25 : Tensor = aten::_convolution(%1021, %self.up_1.norm_1.mlp_gamma.weight, %self.up_1.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.25 : Tensor = aten::_convolution(%1021, %self.up_1.norm_1.mlp_beta.weight, %self.up_1.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1024 : Tensor = aten::add(%gamma.25, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1025 : Tensor = aten::mul(%normalized.25, %1024) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.37 : Tensor = aten::add(%1025, %beta.25, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.39 : Tensor = aten::leaky_relu(%input2.37, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%dx.10 : Tensor = aten::_convolution(%input2.39, %806, %self.up_1.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%input3.10 : Tensor = aten::add(%x_s.4, %dx.10, %685) # Face_Enhancement/models/networks/architecture.py:55:0
%input.10 : Tensor = aten::upsample_nearest2d(%input3.10, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
%normalized.27 : Tensor = aten::batch_norm(%input.10, %680, %680, %self.up_2.norm_s.param_free_norm.running_mean, %self.up_2.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
%1032 : int = aten::size(%input.10, %691) # Face_Enhancement/models/networks/normalization.py:88:0
%1033 : int = aten::size(%input.10, %692) # Face_Enhancement/models/networks/normalization.py:88:0
%1034 : int[] = prim::ListConstruct(%1032, %1033)
%input1.39 : Tensor = aten::upsample_bilinear2d(%input0.2, %1034, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
%input0.41 : Tensor = aten::_convolution(%input1.39, %self.up_2.norm_s.mlp_shared.0.weight, %self.up_2.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1037 : Tensor = aten::relu(%input0.41) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.27 : Tensor = aten::_convolution(%1037, %self.up_2.norm_s.mlp_gamma.weight, %self.up_2.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.27 : Tensor = aten::_convolution(%1037, %self.up_2.norm_s.mlp_beta.weight, %self.up_2.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1040 : Tensor = aten::add(%gamma.27, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1041 : Tensor = aten::mul(%normalized.27, %1040) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.41 : Tensor = aten::add(%1041, %beta.27, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%x_s.6 : Tensor = aten::_convolution(%input2.41, %816, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%input0.43 : Tensor = aten::_convolution(%input1.39, %self.up_2.norm_0.mlp_shared.0.weight, %self.up_2.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1045 : Tensor = aten::relu(%input0.43) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.29 : Tensor = aten::_convolution(%1045, %self.up_2.norm_0.mlp_gamma.weight, %self.up_2.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.29 : Tensor = aten::_convolution(%1045, %self.up_2.norm_0.mlp_beta.weight, %self.up_2.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1048 : Tensor = aten::add(%gamma.29, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1049 : Tensor = aten::mul(%normalized.27, %1048) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.43 : Tensor = aten::add(%1049, %beta.29, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%input1.43 : Tensor = aten::leaky_relu(%input2.43, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%input0.45 : Tensor = aten::_convolution(%input1.43, %823, %self.up_2.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%normalized.31 : Tensor = aten::batch_norm(%input0.45, %680, %680, %self.up_2.norm_1.param_free_norm.running_mean, %self.up_2.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
%1054 : int = aten::size(%input0.45, %691) # Face_Enhancement/models/networks/normalization.py:88:0
%1055 : int = aten::size(%input0.45, %692) # Face_Enhancement/models/networks/normalization.py:88:0
%1056 : int[] = prim::ListConstruct(%1054, %1055)
%input1.45 : Tensor = aten::upsample_bilinear2d(%input0.2, %1056, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
%input0.47 : Tensor = aten::_convolution(%input1.45, %self.up_2.norm_1.mlp_shared.0.weight, %self.up_2.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1059 : Tensor = aten::relu(%input0.47) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.31 : Tensor = aten::_convolution(%1059, %self.up_2.norm_1.mlp_gamma.weight, %self.up_2.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.31 : Tensor = aten::_convolution(%1059, %self.up_2.norm_1.mlp_beta.weight, %self.up_2.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1062 : Tensor = aten::add(%gamma.31, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1063 : Tensor = aten::mul(%normalized.31, %1062) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.45 : Tensor = aten::add(%1063, %beta.31, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.47 : Tensor = aten::leaky_relu(%input2.45, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%dx.12 : Tensor = aten::_convolution(%input2.47, %833, %self.up_2.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%input3.12 : Tensor = aten::add(%x_s.6, %dx.12, %685) # Face_Enhancement/models/networks/architecture.py:55:0
%input.1 : Tensor = aten::upsample_nearest2d(%input3.12, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
%normalized.2 : Tensor = aten::batch_norm(%input.1, %680, %680, %self.up_3.norm_s.param_free_norm.running_mean, %self.up_3.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
%1070 : int = aten::size(%input.1, %691) # Face_Enhancement/models/networks/normalization.py:88:0
%1071 : int = aten::size(%input.1, %692) # Face_Enhancement/models/networks/normalization.py:88:0
%1072 : int[] = prim::ListConstruct(%1070, %1071)
%input1.4 : Tensor = aten::upsample_bilinear2d(%input0.2, %1072, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
%input0.4 : Tensor = aten::_convolution(%input1.4, %self.up_3.norm_s.mlp_shared.0.weight, %self.up_3.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1075 : Tensor = aten::relu(%input0.4) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.2 : Tensor = aten::_convolution(%1075, %self.up_3.norm_s.mlp_gamma.weight, %self.up_3.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.2 : Tensor = aten::_convolution(%1075, %self.up_3.norm_s.mlp_beta.weight, %self.up_3.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1078 : Tensor = aten::add(%gamma.2, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1079 : Tensor = aten::mul(%normalized.2, %1078) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.4 : Tensor = aten::add(%1079, %beta.2, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%x_s.1 : Tensor = aten::_convolution(%input2.4, %843, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%input0.6 : Tensor = aten::_convolution(%input1.4, %self.up_3.norm_0.mlp_shared.0.weight, %self.up_3.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1083 : Tensor = aten::relu(%input0.6) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_gamma.weight, %self.up_3.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_beta.weight, %self.up_3.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1086 : Tensor = aten::add(%gamma.4, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1087 : Tensor = aten::mul(%normalized.2, %1086) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.6 : Tensor = aten::add(%1087, %beta.4, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%input1.2 : Tensor = aten::leaky_relu(%input2.6, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%input0.8 : Tensor = aten::_convolution(%input1.2, %850, %self.up_3.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%normalized.1 : Tensor = aten::batch_norm(%input0.8, %680, %680, %self.up_3.norm_1.param_free_norm.running_mean, %self.up_3.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
%1092 : int = aten::size(%input0.8, %691) # Face_Enhancement/models/networks/normalization.py:88:0
%1093 : int = aten::size(%input0.8, %692) # Face_Enhancement/models/networks/normalization.py:88:0
%1094 : int[] = prim::ListConstruct(%1092, %1093)
%input1.1 : Tensor = aten::upsample_bilinear2d(%input0.2, %1094, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
%input0.49 : Tensor = aten::_convolution(%input1.1, %self.up_3.norm_1.mlp_shared.0.weight, %self.up_3.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1097 : Tensor = aten::relu(%input0.49) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
%gamma.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_gamma.weight, %self.up_3.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%beta.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_beta.weight, %self.up_3.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%1100 : Tensor = aten::add(%gamma.1, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%1101 : Tensor = aten::mul(%normalized.1, %1100) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.1 : Tensor = aten::add(%1101, %beta.1, %685) # Face_Enhancement/models/networks/normalization.py:98:0
%input2.2 : Tensor = aten::leaky_relu(%input2.1, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%dx.1 : Tensor = aten::_convolution(%input2.2, %860, %self.up_3.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%input3.1 : Tensor = aten::add(%x_s.1, %dx.1, %685) # Face_Enhancement/models/networks/architecture.py:55:0
%input2.5 : Tensor = aten::leaky_relu(%input3.1, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
%input0.1 : Tensor = aten::_convolution(%input2.5, %self.conv_img.weight, %self.conv_img.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
%433 : Tensor = aten::tanh(%input0.1) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1699:0
return (%433)

DEBUG: [TRTorch] - Post unpack var: graph(%input.2 : Tensor, %input0.2 : Tensor):
%678 : int[] = prim::Constant[value=[16, 16]]()
%679 : bool = prim::Constant[value=0]()
%680 : NoneType = prim::Constant()
(the rest of this dump repeats the lowered graph above verbatim, weight constants included, and the paste is truncated partway through it)
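In the lowered graph the NoneType constant `%680 : NoneType = prim::Constant()` stands in wherever an optional tensor argument was `None` in Python: it is the bias of the shortcut convolutions (`%x_s.1`, `%x_s.6`, and so on, presumably because SPADEResnetBlock builds its learned-shortcut `conv_s` with `bias=False`) and the affine weight/bias of every parameter-free `aten::batch_norm`. The unwrap failure is consistent with one of these converters assuming the argument is always an `at::Tensor`. Below is a minimal diagnostic sketch (my own helper, not a TRTorch API) for listing such nodes before compiling:

```python
import torch

# Hypothetical helper: find graph nodes that receive a NoneType constant
# where an optional tensor argument is expected.
def find_none_tensor_args(scripted: torch.jit.ScriptModule,
                          kinds=("aten::_convolution", "aten::batch_norm")):
    hits = []
    # inlined_graph folds prim::CallMethod bodies into one flat graph,
    # comparable to the lowered graph in the log above.
    for kind in kinds:
        for node in scripted.inlined_graph.findAllNodes(kind):
            none_args = [i for i, v in enumerate(node.inputs())
                         if str(v.type()) == "NoneType"]
            if none_args:
                hits.append((node, none_args))
    return hits

# e.g. for the generator being compiled:
# for node, arg_positions in find_none_tensor_args(torch.jit.script(netG)):
#     print(arg_positions, node)
```

Run on the scripted SPADEGenerator, this should flag exactly the `%x_s` shortcut convolutions and the parameter-free batch norms seen in the dump, which narrows down which converter trips the assertion.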
%679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input0.6 : Tensor = aten::_convolution(%input1.4, %self.up_3.norm_0.mlp_shared.0.weight, %self.up_3.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1083 : Tensor = aten::relu(%input0.6) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_gamma.weight, %self.up_3.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_beta.weight, %self.up_3.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1086 : Tensor = aten::add(%gamma.4, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1087 : Tensor = aten::mul(%normalized.2, %1086) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.6 : Tensor = aten::add(%1087, %beta.4, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.2 : Tensor = aten::leaky_relu(%input2.6, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.8 : Tensor = aten::_convolution(%input1.2, %850, %self.up_3.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.1 : Tensor = aten::batch_norm(%input0.8, %680, %680, %self.up_3.norm_1.param_free_norm.running_mean, %self.up_3.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %1092 : int = aten::size(%input0.8, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %1093 : int = aten::size(%input0.8, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %1094 : int[] = prim::ListConstruct(%1092, %1093) %input1.1 : Tensor = aten::upsample_bilinear2d(%input0.2, %1094, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.49 : Tensor = aten::_convolution(%input1.1, %self.up_3.norm_1.mlp_shared.0.weight, %self.up_3.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1097 : Tensor = aten::relu(%input0.49) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_gamma.weight, %self.up_3.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_beta.weight, %self.up_3.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1100 : Tensor = aten::add(%gamma.1, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1101 : Tensor = aten::mul(%normalized.1, %1100) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.1 : Tensor = aten::add(%1101, %beta.1, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.2 : Tensor = aten::leaky_relu(%input2.1, %700) # 
anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.1 : Tensor = aten::_convolution(%input2.2, %860, %self.up_3.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.1 : Tensor = aten::add(%x_s.1, %dx.1, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %input2.5 : Tensor = aten::leaky_relu(%input3.1, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.1 : Tensor = aten::_convolution(%input2.5, %self.conv_img.weight, %self.conv_img.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %433 : Tensor = aten::tanh(%input0.1) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1699:0 return (%433) DEBUG: [TRTorch] - RemoveNOPs - Note: Removing operators that have no meaning in TRT INFO: [TRTorch] - graph(%input.2 : Tensor, %input0.2 : Tensor): %678 : int[] = prim::Constant[value=[16, 16]]() %679 : bool = prim::Constant[value=0]() %680 : NoneType = prim::Constant() %self.fc.weight : Float(1024, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.fc.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %683 : int[] = prim::Constant[value=[1, 1]]() %684 : int[] = prim::Constant[value=[0, 0]]() %685 : int = prim::Constant[value=1]() %686 : bool = prim::Constant[value=1]() %self.head_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %689 : float = prim::Constant[value=0.10000000000000001]() %690 : float = prim::Constant[value=1.0000000000000001e-05]() %691 : int = prim::Constant[value=2]() %692 : int = prim::Constant[value=3]() %self.head_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %699 : Tensor = prim::Constant[value={1}]() %700 : float = prim::Constant[value=0.20000000000000001]() %701 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() %self.head_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]() 
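For orientation while reading the dump: the block that repeats for every `norm_s`/`norm_0`/`norm_1` attribute is the SPADE modulation from `Face_Enhancement/models/networks/normalization.py` (lines 88 and 98 in the source comments), and each `aten::add(%x_s.*, %dx.*, ...)` is the residual join at `architecture.py:55`. Below is a minimal sketch of that pattern, reconstructed purely from the graph; the module and attribute names come from the dump, while the constructor arguments are assumptions read off the printed weight shapes (e.g. `Float(128, 3, 3, 3)` suggests `Conv2d(3, 128, kernel_size=3)`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Sketch of the repeated SPADE block in the dump (not the original source)."""

    def __init__(self, norm_nc, label_nc=3, nhidden=128):
        super().__init__()
        # affine=False matches aten::batch_norm being called with None weight/bias
        self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)
        self.mlp_shared = nn.Sequential(
            nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
        self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        normalized = self.param_free_norm(x)
        # normalization.py:88 in the dump: aten::size + aten::upsample_bilinear2d,
        # i.e. the segmentation map is resized to x's spatial dimensions
        segmap = F.interpolate(segmap, size=x.size()[2:], mode="bilinear",
                               align_corners=False)
        actv = self.mlp_shared(segmap)
        gamma = self.mlp_gamma(actv)
        beta = self.mlp_beta(actv)
        # normalization.py:98 in the dump: add({1}) -> mul -> add(beta)
        return normalized * (1 + gamma) + beta
```

Each `SPADEResnetBlock` in the dump then computes `x_s = conv_s(spade_s(x, seg))` for the shortcut, `dx = conv_1(lrelu(spade_1(conv_0(lrelu(spade_0(x, seg))), seg)))` for the residual branch, and returns `x_s + dx` (the `aten::add` at `architecture.py:55`).
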
DEBUG: [TRTorch] - RemoveNOPs - Note: Removing operators that have no meaning in TRT
INFO: [TRTorch] - graph(%input.2 : Tensor,
      %input0.2 : Tensor):
  %678 : int[] = prim::Constant[value=[16, 16]]()
  %679 : bool = prim::Constant[value=0]()
  %680 : NoneType = prim::Constant()
  %self.fc.weight : Float(1024, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.fc.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %683 : int[] = prim::Constant[value=[1, 1]]()
  %684 : int[] = prim::Constant[value=[0, 0]]()
  %685 : int = prim::Constant[value=1]()
  %686 : bool = prim::Constant[value=1]()
  %self.head_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %689 : float = prim::Constant[value=0.10000000000000001]()
  %690 : float = prim::Constant[value=1.0000000000000001e-05]()
  %691 : int = prim::Constant[value=2]()
  %692 : int = prim::Constant[value=3]()
  %self.head_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %699 : Tensor = prim::Constant[value={1}]()
  %700 : float = prim::Constant[value=0.20000000000000001]()
  %701 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %711 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.head_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %713 : float[] = prim::Constant[value=[2., 2.]]()
  %self.G_middle_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %722 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %732 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %742 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %752 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.G_middle_1.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_s.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %762 : Float(512, 1024, 1, 1, strides=[1024, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %769 : Float(512, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.conv_0.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.norm_1.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %779 : Float(512, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_0.conv_1.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_s.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %789 : Float(256, 512, 1, 1, strides=[512, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_0.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %796 : Float(256, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.conv_0.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.norm_1.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %806 : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_1.conv_1.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_s.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %816 : Float(128, 256, 1, 1, strides=[256, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_0.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %823 : Float(128, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.conv_0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.norm_1.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %833 : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_2.conv_1.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_s.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %843 : Float(64, 128, 1, 1, strides=[128, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_0.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %850 : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.conv_0.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.param_free_norm.running_mean : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.param_free_norm.running_var : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_gamma.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_gamma.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_beta.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.norm_1.mlp_beta.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %860 : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.up_3.conv_1.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.conv_img.weight : Float(3, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
  %self.conv_img.bias : Float(3, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value= 0.1178 -0.0640 -0.1034 [ CUDAFloatType{3} ]]()
  %input1.3 : Tensor = aten::upsample_bilinear2d(%input0.2, %678, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.5 : Tensor = aten::_convolution(%input1.3, %self.fc.weight, %self.fc.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.3 : Tensor = aten::batch_norm(%input0.5, %680, %680, %self.head_0.norm_0.param_free_norm.running_mean, %self.head_0.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %867 : int = aten::size(%input0.5, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %868 : int = aten::size(%input0.5, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %869 : int[] = prim::ListConstruct(%867, %868)
  %input1.5 : Tensor = aten::upsample_bilinear2d(%input0.2, %869, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.7 : Tensor = aten::_convolution(%input1.5, %self.head_0.norm_0.mlp_shared.0.weight, %self.head_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %872 : Tensor = aten::relu(%input0.7) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.3 : Tensor = aten::_convolution(%872, %self.head_0.norm_0.mlp_gamma.weight, %self.head_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.3 : Tensor = aten::_convolution(%872, %self.head_0.norm_0.mlp_beta.weight, %self.head_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %875 : Tensor = aten::add(%gamma.3, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %876 : Tensor = aten::mul(%normalized.3, %875) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.7 : Tensor = aten::add(%876, %beta.3, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.7 : Tensor = aten::leaky_relu(%input2.7, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.9 : Tensor = aten::_convolution(%input1.7, %701, %self.head_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.5 : Tensor = aten::batch_norm(%input0.9, %680, %680, %self.head_0.norm_1.param_free_norm.running_mean, %self.head_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %881 : int = aten::size(%input0.9, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %882 : int = aten::size(%input0.9, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %883 : int[] = prim::ListConstruct(%881, %882)
  %input1.9 : Tensor = aten::upsample_bilinear2d(%input0.2, %883, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.11 : Tensor = aten::_convolution(%input1.9, %self.head_0.norm_1.mlp_shared.0.weight, %self.head_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %886 : Tensor = aten::relu(%input0.11) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.5 : Tensor = aten::_convolution(%886, %self.head_0.norm_1.mlp_gamma.weight, %self.head_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.5 : Tensor = aten::_convolution(%886, %self.head_0.norm_1.mlp_beta.weight, %self.head_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %889 : Tensor = aten::add(%gamma.5, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %890 : Tensor = aten::mul(%normalized.5, %889) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.9 : Tensor = aten::add(%890, %beta.5, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.11 : Tensor = aten::leaky_relu(%input2.9, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.2 : Tensor = aten::_convolution(%input2.11, %711, %self.head_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.2 : Tensor = aten::add(%input0.5, %dx.2, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %input.4 : Tensor = aten::upsample_nearest2d(%input3.2, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
  %normalized.7 : Tensor = aten::batch_norm(%input.4, %680, %680, %self.G_middle_0.norm_0.param_free_norm.running_mean, %self.G_middle_0.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %897 : int = aten::size(%input.4, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %898 : int = aten::size(%input.4, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %899 : int[] = prim::ListConstruct(%897, %898)
  %input1.11 : Tensor = aten::upsample_bilinear2d(%input0.2, %899, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.13 : Tensor = aten::_convolution(%input1.11, %self.G_middle_0.norm_0.mlp_shared.0.weight, %self.G_middle_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %902 : Tensor = aten::relu(%input0.13) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.7 : Tensor = aten::_convolution(%902, %self.G_middle_0.norm_0.mlp_gamma.weight, %self.G_middle_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.7 : Tensor = aten::_convolution(%902, %self.G_middle_0.norm_0.mlp_beta.weight, %self.G_middle_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %905 : Tensor = aten::add(%gamma.7, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %906 : Tensor = aten::mul(%normalized.7, %905) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.13 : Tensor = aten::add(%906, %beta.7, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.13 : Tensor = aten::leaky_relu(%input2.13, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.15 : Tensor = aten::_convolution(%input1.13, %722, %self.G_middle_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.9 : Tensor = aten::batch_norm(%input0.15, %680, %680, %self.G_middle_0.norm_1.param_free_norm.running_mean, %self.G_middle_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %911 : int = aten::size(%input0.15, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %912 : int = aten::size(%input0.15, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %913 : int[] = prim::ListConstruct(%911, %912)
  %input1.15 : Tensor = aten::upsample_bilinear2d(%input0.2, %913, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.17 : Tensor = aten::_convolution(%input1.15, %self.G_middle_0.norm_1.mlp_shared.0.weight, %self.G_middle_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %916 : Tensor = aten::relu(%input0.17) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.9 : Tensor = aten::_convolution(%916, %self.G_middle_0.norm_1.mlp_gamma.weight, %self.G_middle_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.9 : Tensor = aten::_convolution(%916, %self.G_middle_0.norm_1.mlp_beta.weight, %self.G_middle_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %919 : Tensor = aten::add(%gamma.9, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %920 : Tensor = aten::mul(%normalized.9, %919) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.15 : Tensor = aten::add(%920, %beta.9, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.17 : Tensor = aten::leaky_relu(%input2.15, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.4 : Tensor = aten::_convolution(%input2.17, %732, %self.G_middle_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.4 : Tensor = aten::add(%input.4, %dx.4, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %normalized.11 : Tensor = aten::batch_norm(%input3.4, %680, %680, %self.G_middle_1.norm_0.param_free_norm.running_mean, %self.G_middle_1.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %926 : int = aten::size(%input3.4, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %927 : int = aten::size(%input3.4, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %928 : int[] = prim::ListConstruct(%926, %927)
  %input1.17 : Tensor = aten::upsample_bilinear2d(%input0.2, %928, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.19 : Tensor = aten::_convolution(%input1.17, %self.G_middle_1.norm_0.mlp_shared.0.weight, %self.G_middle_1.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %931 : Tensor = aten::relu(%input0.19) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.11 : Tensor = aten::_convolution(%931, %self.G_middle_1.norm_0.mlp_gamma.weight, %self.G_middle_1.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.11 : Tensor = aten::_convolution(%931, %self.G_middle_1.norm_0.mlp_beta.weight, %self.G_middle_1.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %934 : Tensor = aten::add(%gamma.11, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %935 : Tensor = aten::mul(%normalized.11, %934) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.19 : Tensor = aten::add(%935, %beta.11, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.19 : Tensor = aten::leaky_relu(%input2.19, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.21 : Tensor = aten::_convolution(%input1.19, %742, %self.G_middle_1.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.13 : Tensor = aten::batch_norm(%input0.21, %680, %680, %self.G_middle_1.norm_1.param_free_norm.running_mean, %self.G_middle_1.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %940 : int = aten::size(%input0.21, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %941 : int = aten::size(%input0.21, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %942 : int[] = prim::ListConstruct(%940, %941)
  %input1.21 : Tensor = aten::upsample_bilinear2d(%input0.2, %942, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.23 : Tensor = aten::_convolution(%input1.21, %self.G_middle_1.norm_1.mlp_shared.0.weight, %self.G_middle_1.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %945 : Tensor = aten::relu(%input0.23) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.13 : Tensor = aten::_convolution(%945, %self.G_middle_1.norm_1.mlp_gamma.weight, %self.G_middle_1.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.13 : Tensor = aten::_convolution(%945, %self.G_middle_1.norm_1.mlp_beta.weight, %self.G_middle_1.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %948 : Tensor = aten::add(%gamma.13, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %949 : Tensor = aten::mul(%normalized.13, %948) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.21 : Tensor = aten::add(%949, %beta.13, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.23 : Tensor = aten::leaky_relu(%input2.21, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.6 : Tensor = aten::_convolution(%input2.23, %752, %self.G_middle_1.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.6 : Tensor = aten::add(%input3.4, %dx.6, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %input.6 : Tensor = aten::upsample_nearest2d(%input3.6, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
  %normalized.15 : Tensor = aten::batch_norm(%input.6, %680, %680, %self.up_0.norm_s.param_free_norm.running_mean, %self.up_0.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %956 : int = aten::size(%input.6, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %957 : int = aten::size(%input.6, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %958 : int[] = prim::ListConstruct(%956, %957)
  %input1.23 : Tensor = aten::upsample_bilinear2d(%input0.2, %958, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.25 : Tensor = aten::_convolution(%input1.23, %self.up_0.norm_s.mlp_shared.0.weight, %self.up_0.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %961 : Tensor = aten::relu(%input0.25) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.15 : Tensor = aten::_convolution(%961, %self.up_0.norm_s.mlp_gamma.weight, %self.up_0.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.15 : Tensor = aten::_convolution(%961, %self.up_0.norm_s.mlp_beta.weight, %self.up_0.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %964 : Tensor = aten::add(%gamma.15, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %965 : Tensor = aten::mul(%normalized.15, %964) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.25 : Tensor = aten::add(%965, %beta.15, %685) # Face_Enhancement/models/networks/normalization.py:98:0
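  # note: %680 is the NoneType constant declared at the top of this graph; the next node passes it as
  # the optional bias argument of aten::_convolution, so the shortcut convolutions (%x_s.*) carry a
  # None IValue instead of a bias Tensor (they are the bias=False 1x1 convolutions of the resnet blocks).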
  %x_s.2 : Tensor = aten::_convolution(%input2.25, %762, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input0.27 : Tensor = aten::_convolution(%input1.23, %self.up_0.norm_0.mlp_shared.0.weight, %self.up_0.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %969 : Tensor = aten::relu(%input0.27) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.17 : Tensor = aten::_convolution(%969, %self.up_0.norm_0.mlp_gamma.weight, %self.up_0.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.17 : Tensor = aten::_convolution(%969, %self.up_0.norm_0.mlp_beta.weight, %self.up_0.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %972 : Tensor = aten::add(%gamma.17, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %973 : Tensor = aten::mul(%normalized.15, %972) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.27 : Tensor = aten::add(%973, %beta.17, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.27 : Tensor = aten::leaky_relu(%input2.27, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.29 : Tensor = aten::_convolution(%input1.27, %769, %self.up_0.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.19 : Tensor = aten::batch_norm(%input0.29, %680, %680, %self.up_0.norm_1.param_free_norm.running_mean, %self.up_0.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %978 : int = aten::size(%input0.29, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %979 : int = aten::size(%input0.29, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %980 : int[] = prim::ListConstruct(%978, %979)
  %input1.29 : Tensor = aten::upsample_bilinear2d(%input0.2, %980, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.31 : Tensor = aten::_convolution(%input1.29, %self.up_0.norm_1.mlp_shared.0.weight, %self.up_0.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %983 : Tensor = aten::relu(%input0.31) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.19 : Tensor = aten::_convolution(%983, %self.up_0.norm_1.mlp_gamma.weight, %self.up_0.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.19 : Tensor = aten::_convolution(%983, %self.up_0.norm_1.mlp_beta.weight, %self.up_0.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %986 : Tensor = aten::add(%gamma.19, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %987 : Tensor = aten::mul(%normalized.19, %986) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.29 : Tensor = aten::add(%987, %beta.19, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.31 : Tensor = aten::leaky_relu(%input2.29, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.8 : Tensor = aten::_convolution(%input2.31, %779, %self.up_0.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.8 : Tensor = aten::add(%x_s.2, %dx.8, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %input.8 : Tensor = aten::upsample_nearest2d(%input3.8, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
  %normalized.21 : Tensor = aten::batch_norm(%input.8, %680, %680, %self.up_1.norm_s.param_free_norm.running_mean, %self.up_1.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %994 : int = aten::size(%input.8, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %995 : int = aten::size(%input.8, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %996 : int[] = prim::ListConstruct(%994, %995)
  %input1.31 : Tensor = aten::upsample_bilinear2d(%input0.2, %996, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.33 : Tensor = aten::_convolution(%input1.31, %self.up_1.norm_s.mlp_shared.0.weight, %self.up_1.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %999 : Tensor = aten::relu(%input0.33) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.21 : Tensor = aten::_convolution(%999, %self.up_1.norm_s.mlp_gamma.weight, %self.up_1.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.21 : Tensor = aten::_convolution(%999, %self.up_1.norm_s.mlp_beta.weight, %self.up_1.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1002 : Tensor = aten::add(%gamma.21, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1003 : Tensor = aten::mul(%normalized.21, %1002) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.33 : Tensor = aten::add(%1003, %beta.21, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %x_s.4 : Tensor = aten::_convolution(%input2.33, %789, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input0.35 : Tensor = aten::_convolution(%input1.31, %self.up_1.norm_0.mlp_shared.0.weight, %self.up_1.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1007 : Tensor = aten::relu(%input0.35) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.23 : Tensor = aten::_convolution(%1007, %self.up_1.norm_0.mlp_gamma.weight, %self.up_1.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.23 : Tensor = aten::_convolution(%1007, %self.up_1.norm_0.mlp_beta.weight, %self.up_1.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1010 : Tensor = aten::add(%gamma.23, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1011 : Tensor = aten::mul(%normalized.21, %1010) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.35 : Tensor = aten::add(%1011, %beta.23, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.35 : Tensor = aten::leaky_relu(%input2.35, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.37 : Tensor = aten::_convolution(%input1.35, %796, %self.up_1.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.25 : Tensor = aten::batch_norm(%input0.37, %680, %680, %self.up_1.norm_1.param_free_norm.running_mean, %self.up_1.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %1016 : int = aten::size(%input0.37, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %1017 : int = aten::size(%input0.37, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %1018 : int[] = prim::ListConstruct(%1016, %1017)
  %input1.37 : Tensor = aten::upsample_bilinear2d(%input0.2, %1018, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.39 : Tensor = aten::_convolution(%input1.37, %self.up_1.norm_1.mlp_shared.0.weight, %self.up_1.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1021 : Tensor = aten::relu(%input0.39) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.25 : Tensor = aten::_convolution(%1021, %self.up_1.norm_1.mlp_gamma.weight, %self.up_1.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.25 : Tensor = aten::_convolution(%1021, %self.up_1.norm_1.mlp_beta.weight, %self.up_1.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1024 : Tensor = aten::add(%gamma.25, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1025 : Tensor = aten::mul(%normalized.25, %1024) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.37 : Tensor = aten::add(%1025, %beta.25, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.39 : Tensor = aten::leaky_relu(%input2.37, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.10 : Tensor = aten::_convolution(%input2.39, %806, %self.up_1.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.10 : Tensor = aten::add(%x_s.4, %dx.10, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %input.10 : Tensor = aten::upsample_nearest2d(%input3.10, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
  %normalized.27 : Tensor = aten::batch_norm(%input.10, %680, %680, %self.up_2.norm_s.param_free_norm.running_mean, %self.up_2.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %1032 : int = aten::size(%input.10, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %1033 : int = aten::size(%input.10, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %1034 : int[] = prim::ListConstruct(%1032, %1033)
  %input1.39 : Tensor = aten::upsample_bilinear2d(%input0.2, %1034, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.41 : Tensor = aten::_convolution(%input1.39, %self.up_2.norm_s.mlp_shared.0.weight, %self.up_2.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1037 : Tensor = aten::relu(%input0.41) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.27 : Tensor = aten::_convolution(%1037, %self.up_2.norm_s.mlp_gamma.weight, %self.up_2.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.27 : Tensor = aten::_convolution(%1037, %self.up_2.norm_s.mlp_beta.weight, %self.up_2.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1040 : Tensor = aten::add(%gamma.27, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1041 : Tensor = aten::mul(%normalized.27, %1040) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.41 : Tensor = aten::add(%1041, %beta.27, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %x_s.6 : Tensor = aten::_convolution(%input2.41, %816, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input0.43 : Tensor = aten::_convolution(%input1.39, %self.up_2.norm_0.mlp_shared.0.weight, %self.up_2.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1045 : Tensor = aten::relu(%input0.43) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.29 : Tensor = aten::_convolution(%1045, %self.up_2.norm_0.mlp_gamma.weight, %self.up_2.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.29 : Tensor = aten::_convolution(%1045, %self.up_2.norm_0.mlp_beta.weight, %self.up_2.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1048 : Tensor = aten::add(%gamma.29, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1049 : Tensor = aten::mul(%normalized.27, %1048) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.43 : Tensor = aten::add(%1049, %beta.29, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input1.43 : Tensor = aten::leaky_relu(%input2.43, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %input0.45 : Tensor = aten::_convolution(%input1.43, %823, %self.up_2.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %normalized.31 : Tensor = aten::batch_norm(%input0.45, %680, %680, %self.up_2.norm_1.param_free_norm.running_mean, %self.up_2.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %1054 : int = aten::size(%input0.45, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %1055 : int = aten::size(%input0.45, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %1056 : int[] = prim::ListConstruct(%1054, %1055)
  %input1.45 : Tensor = aten::upsample_bilinear2d(%input0.2, %1056, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.47 : Tensor = aten::_convolution(%input1.45, %self.up_2.norm_1.mlp_shared.0.weight, %self.up_2.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1059 : Tensor = aten::relu(%input0.47) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0
  %gamma.31 : Tensor = aten::_convolution(%1059, %self.up_2.norm_1.mlp_gamma.weight, %self.up_2.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %beta.31 : Tensor = aten::_convolution(%1059, %self.up_2.norm_1.mlp_beta.weight, %self.up_2.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %1062 : Tensor = aten::add(%gamma.31, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %1063 : Tensor = aten::mul(%normalized.31, %1062) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.45 : Tensor = aten::add(%1063, %beta.31, %685) # Face_Enhancement/models/networks/normalization.py:98:0
  %input2.47 : Tensor = aten::leaky_relu(%input2.45, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0
  %dx.12 : Tensor = aten::_convolution(%input2.47, %833, %self.up_2.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0
  %input3.12 : Tensor = aten::add(%x_s.6, %dx.12, %685) # Face_Enhancement/models/networks/architecture.py:55:0
  %input.1 : Tensor = aten::upsample_nearest2d(%input3.12, %680, %713) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3535:0
  %normalized.2 : Tensor = aten::batch_norm(%input.1, %680, %680, %self.up_3.norm_s.param_free_norm.running_mean, %self.up_3.norm_s.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0
  %1070 : int = aten::size(%input.1, %691) # Face_Enhancement/models/networks/normalization.py:88:0
  %1071 : int = aten::size(%input.1, %692) # Face_Enhancement/models/networks/normalization.py:88:0
  %1072 : int[] = prim::ListConstruct(%1070, %1071)
  %input1.4 : Tensor = aten::upsample_bilinear2d(%input0.2, %1072, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0
  %input0.4 : Tensor = aten::_convolution(%input1.4,
%self.up_3.norm_s.mlp_shared.0.weight, %self.up_3.norm_s.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1075 : Tensor = aten::relu(%input0.4) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.2 : Tensor = aten::_convolution(%1075, %self.up_3.norm_s.mlp_gamma.weight, %self.up_3.norm_s.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.2 : Tensor = aten::_convolution(%1075, %self.up_3.norm_s.mlp_beta.weight, %self.up_3.norm_s.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1078 : Tensor = aten::add(%gamma.2, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1079 : Tensor = aten::mul(%normalized.2, %1078) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.4 : Tensor = aten::add(%1079, %beta.2, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %x_s.1 : Tensor = aten::_convolution(%input2.4, %843, %680, %683, %684, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input0.6 : Tensor = aten::_convolution(%input1.4, %self.up_3.norm_0.mlp_shared.0.weight, %self.up_3.norm_0.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1083 : Tensor = aten::relu(%input0.6) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_gamma.weight, %self.up_3.norm_0.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.4 : Tensor = aten::_convolution(%1083, %self.up_3.norm_0.mlp_beta.weight, %self.up_3.norm_0.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1086 : Tensor = aten::add(%gamma.4, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1087 : Tensor = aten::mul(%normalized.2, %1086) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.6 : Tensor = aten::add(%1087, %beta.4, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input1.2 : Tensor = aten::leaky_relu(%input2.6, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.8 : Tensor = aten::_convolution(%input1.2, %850, %self.up_3.conv_0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %normalized.1 : Tensor = aten::batch_norm(%input0.8, %680, %680, %self.up_3.norm_1.param_free_norm.running_mean, %self.up_3.norm_1.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 %1092 : int = aten::size(%input0.8, %691) # Face_Enhancement/models/networks/normalization.py:88:0 %1093 : int = aten::size(%input0.8, %692) # Face_Enhancement/models/networks/normalization.py:88:0 %1094 : int[] = prim::ListConstruct(%1092, %1093) %input1.1 : Tensor = aten::upsample_bilinear2d(%input0.2, %1094, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 %input0.49 : Tensor = aten::_convolution(%input1.1, 
%self.up_3.norm_1.mlp_shared.0.weight, %self.up_3.norm_1.mlp_shared.0.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1097 : Tensor = aten::relu(%input0.49) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1206:0 %gamma.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_gamma.weight, %self.up_3.norm_1.mlp_gamma.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %beta.1 : Tensor = aten::_convolution(%1097, %self.up_3.norm_1.mlp_beta.weight, %self.up_3.norm_1.mlp_beta.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %1100 : Tensor = aten::add(%gamma.1, %699, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %1101 : Tensor = aten::mul(%normalized.1, %1100) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.1 : Tensor = aten::add(%1101, %beta.1, %685) # Face_Enhancement/models/networks/normalization.py:98:0 %input2.2 : Tensor = aten::leaky_relu(%input2.1, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %dx.1 : Tensor = aten::_convolution(%input2.2, %860, %self.up_3.conv_1.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %input3.1 : Tensor = aten::add(%x_s.1, %dx.1, %685) # Face_Enhancement/models/networks/architecture.py:55:0 %input2.5 : Tensor = aten::leaky_relu(%input3.1, %700) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1378:0 %input0.1 : Tensor = aten::_convolution(%input2.5, %self.conv_img.weight, %self.conv_img.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 %433 : Tensor = aten::tanh(%input0.1) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:1699:0 return (%433) (CompileGraph) INFO: [TRTorch Conversion Context] - [MemUsageChange] Init CUDA: CPU +524, GPU +0, now: CPU 3605, GPU 3002 (MiB) DEBUG: [TRTorch] - Settings requested for TensorRT engine: Enabled Precisions: Float16 TF32 Floating Point Computation Enabled: 1 Truncate Long and Double: 1 Make Refittable Engine: 0 Debuggable Engine: 1 Strict Types: 0 GPU ID: 0 Allow GPU Fallback (if running on DLA): 0 Min Timing Iterations: 2 Avg Timing Iterations: 1 Max Workspace Size: 1048576 Max Batch Size: Not set Device Type: GPU GPU ID: 0 Engine Capability: standard Calibrator Created: 0 INFO: [TRTorch Conversion Context] - Converting Block DEBUG: [TRTorch Conversion Context] - INFO: [TRTorch Conversion Context] - Adding Input input.2 (named: input_0): Input(shape: [1, 18, -1, -1], min: [1, 18, 64, 64], opt: [1, 18, 512, 512], max: [1, 18, 512, 512], dtype: Float16, format: NCHW\Contiguous\Linear) in engine (conversion.AddInputs) INFO: [TRTorch Conversion Context] - Adding Input input0.2 (named: input_1): Input(shape: [1, 3, -1, -1], min: [1, 3, 64, 64], opt: [1, 3, 512, 512], max: [1, 3, 512, 512], dtype: Float16, format: NCHW\Contiguous\Linear) in engine (conversion.AddInputs) DEBUG: [TRTorch Conversion Context] - Evaluating %678 : int[] = prim::Constant[value=[16, 16]]() DEBUG: [TRTorch Conversion Context] - Found the value to be: [16, 16] DEBUG: [TRTorch Conversion Context] - Evaluating %679 : bool = prim::Constant[value=0]() DEBUG: [TRTorch Conversion Context] - Found the value to be: False 
DEBUG: [TRTorch Conversion Context] - Evaluating %680 : NoneType = prim::Constant()
DEBUG: [TRTorch Conversion Context] - Found the value to be: None
DEBUG: [TRTorch Conversion Context] - Evaluating %self.fc.weight : Float(1024, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.fc.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %683 : int[] = prim::Constant[value=[1, 1]]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: [1, 1]
DEBUG: [TRTorch Conversion Context] - Evaluating %684 : int[] = prim::Constant[value=[0, 0]]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: [0, 0]
DEBUG: [TRTorch Conversion Context] - Evaluating %685 : int = prim::Constant[value=1]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 1
DEBUG: [TRTorch Conversion Context] - Evaluating %686 : bool = prim::Constant[value=1]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: True
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %689 : float = prim::Constant[value=0.10000000000000001]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 0.10000000000000001
DEBUG: [TRTorch Conversion Context] - Evaluating %690 : float = prim::Constant[value=1.0000000000000001e-05]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 1.0000000000000001e-05
DEBUG: [TRTorch Conversion Context] - Evaluating %691 : int = prim::Constant[value=2]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 2
DEBUG: [TRTorch Conversion Context] - Evaluating %692 : int = prim::Constant[value=3]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 3
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %699 : Tensor = prim::Constant[value={1}]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [])
DEBUG: [TRTorch Conversion Context] - Evaluating %700 : float = prim::Constant[value=0.20000000000000001]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 0.20000000000000001
DEBUG: [TRTorch Conversion Context] - Evaluating %701 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %711 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.head_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %713 : float[] = prim::Constant[value=[2., 2.]]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: [2., 2.]
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %722 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %732 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_0.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %742 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.conv_0.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.norm_1.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %752 : Float(1024, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.G_middle_1.conv_1.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.param_free_norm.running_mean : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.param_free_norm.running_var : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_s.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %762 : Float(512, 1024, 1, 1, strides=[1024, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 1024, 1, 1])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_gamma.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_gamma.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_beta.weight : Float(1024, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_0.mlp_beta.bias : Float(1024, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [1024])
DEBUG: [TRTorch Conversion Context] - Evaluating %769 : Float(512, 1024, 3, 3, strides=[9216, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 1024, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.conv_0.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.norm_1.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %779 : Float(512, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 512, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_0.conv_1.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.param_free_norm.running_mean : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.param_free_norm.running_var : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_s.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %789 : Float(256, 512, 1, 1, strides=[512, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 512, 1, 1])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_gamma.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_gamma.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_beta.weight : Float(512, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_0.mlp_beta.bias : Float(512, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [512])
DEBUG: [TRTorch Conversion Context] - Evaluating %796 : Float(256, 512, 3, 3, strides=[4608, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 512, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.conv_0.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.norm_1.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %806 : Float(256, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 256, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_1.conv_1.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.param_free_norm.running_mean : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.param_free_norm.running_var : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_s.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %816 : Float(128, 256, 1, 1, strides=[256, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 256, 1, 1])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_gamma.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_gamma.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_beta.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_0.mlp_beta.bias : Float(256, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [256])
DEBUG: [TRTorch Conversion Context] - Evaluating %823 : Float(128, 256, 3, 3, strides=[2304, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 256, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.conv_0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.norm_1.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %833 : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_2.conv_1.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.param_free_norm.running_mean : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.param_free_norm.running_var : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_s.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %843 : Float(64, 128, 1, 1, strides=[128, 1, 1, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64, 128, 1, 1])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_gamma.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_gamma.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_beta.weight : Float(128, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_0.mlp_beta.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %850 : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.conv_0.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.param_free_norm.running_mean : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.param_free_norm.running_var : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_shared.0.weight : Float(128, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128, 3, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_shared.0.bias : Float(128, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [128])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_gamma.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_gamma.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_beta.weight : Float(64, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64, 128, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.norm_1.mlp_beta.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %860 : Float(64, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64, 64, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.up_3.conv_1.bias : Float(64, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [64])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.conv_img.weight : Float(3, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=0, device=cuda:0) = prim::Constant[value=]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [3, 64, 3, 3])
DEBUG: [TRTorch Conversion Context] - Evaluating %self.conv_img.bias : Float(3, strides=[1], requires_grad=0, device=cuda:0) = prim::Constant[value= 0.1178 -0.0640 -0.1034 [ CUDAFloatType{3} ]]()
DEBUG: [TRTorch Conversion Context] - Found the value to be a tensor (shape [3])
INFO: [TRTorch Conversion Context] - Adding Layer %input1.3 : Tensor = aten::upsample_bilinear2d(%input0.2, %678, %679, %680, %680) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:3554:0 (ctx.AddLayer)
DEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch] - Weights: [4]
    Data Type: Int32
    Number of input maps: 4
    Number of output maps: 4
    Element shape: [1]
DEBUG: [TRTorch Conversion Context] - Freezing tensor 0x5591bce723c8 as an IConstantLayer
DEBUG: [TRTorch] - Weights: [4]
    Data Type: Int32
    Number of input maps: 4
    Number of output maps: 4
    Element shape: [1]
DEBUG: [TRTorch Conversion Context] - Freezing tensor 0x5591bce724a8 as an IConstantLayer
DEBUG: [TRTorch] - Output tensor shape: [1, 3, 16, 16]
INFO: [TRTorch Conversion Context] - Adding Layer %input0.5 : Tensor = aten::_convolution(%input1.3, %self.fc.weight, %self.fc.bias, %683, %683, %683, %679, %684, %685, %679, %679, %686, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/modules/conv.py:395:0 (ctx.AddLayer)
DEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch] - Weights: [1024]
    Data Type: Float32
    Number of input maps: 1024
    Number of output maps: 1024
    Element shape: [1]
DEBUG: [TRTorch] - Weights: [1024, 3, 3, 3]
    Data Type: Float32
    Number of input maps: 3
    Number of output maps: 1024
    Element shape: [3,3]
DEBUG: [TRTorch] - Input dims: [1, 3, 16, 16]
DEBUG: [TRTorch] - Weights: Weights: [1024, 3, 3, 3]
    Data Type: Float32
    Number of input maps: 3
    Number of output maps: 1024
    Element shape: [3,3]
DEBUG: [TRTorch] - stride: [1, 1]
DEBUG: [TRTorch] - padding: [1, 1]
DEBUG: [TRTorch] - dilation: [1, 1]
DEBUG: [TRTorch] - out_padding: [0, 0]
DEBUG: [TRTorch] - groups: 1
DEBUG: [TRTorch] - Output tensor shape: [1, 1024, 16, 16]
INFO: [TRTorch Conversion Context] - Adding Layer %normalized.3 : Tensor = aten::batch_norm(%input0.5, %680, %680, %self.head_0.norm_0.param_free_norm.running_mean, %self.head_0.norm_0.param_free_norm.running_var, %679, %689, %690, %686) # anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:2149:0 (ctx.AddLayer)
DEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
Traceback (most recent call last):
  File "test_face.py", line 36, in <module>
    generated = model(data_i, mode="inference")
  File "anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "Face_Enhancement/models/pix2pix_model.py", line 50, in forward
    fake_image, _ = self.generate_fake(input_semantics, degraded_image, real_image)
  File "Face_Enhancement/models/pix2pix_model.py", line 219, in generate_fake
    trt_ts_module = trtorch.compile(model, compile_settings)
  File "anaconda3/lib/python3.8/site-packages/trtorch/_compiler.py", line 81, in compile
    compiled_cpp_mod = trtorch._C.compile_graph(module._c, _parse_compile_spec(compile_spec))
RuntimeError: [Error thrown at ./core/conversion/var/Var_inl.h:37] Expected ivalue->isTensor() to be true but got false
Requested unwrapping of arg IValue assuming it was N2at6TensorE however type is NoneType
```
narendasan commented 3 years ago

I see. Are you using dynamic shape? The batch_norm converter might be hitting this line with the None: https://github.com/NVIDIA/TRTorch/blob/3789f0fc7ed5c6d04093ffe7f4426a9261d16347/core/conversion/converters/impl/batch_norm.cpp#L58. We probably need defaults there that make sense for dynamic shape.
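
In the meantime, a user-side workaround may be possible without touching the converter. The sketch below is a guess at such a workaround, assuming the NoneType IValue is the weight/bias argument of aten::batch_norm, which TorchScript records as None when a BatchNorm2d is constructed with affine=False (as SPADE's param_free_norm layers are). The helper name materialize_batchnorm_affine is hypothetical, not part of TRTorch; it swaps in an identity scale (weight=1) and zero shift (bias=0), which leaves the math unchanged but keeps those arguments as tensors:

```python
import torch
import torch.nn as nn


def materialize_batchnorm_affine(module: nn.Module) -> nn.Module:
    """Hypothetical workaround: give every affine-less BatchNorm2d concrete
    weight/bias tensors so TorchScript does not record None for the
    weight/bias arguments of aten::batch_norm.

    weight=1 and bias=0 make the added affine step an identity, so the
    module's outputs are unchanged.
    """
    for m in module.modules():
        if isinstance(m, nn.BatchNorm2d) and not m.affine:
            # Match the device of the existing buffers if they exist.
            dev = (m.running_mean.device
                   if m.running_mean is not None else torch.device("cpu"))
            m.affine = True
            m.weight = nn.Parameter(torch.ones(m.num_features, device=dev),
                                    requires_grad=False)
            m.bias = nn.Parameter(torch.zeros(m.num_features, device=dev),
                                  requires_grad=False)
    return module


# Usage sketch: patch the eager model *before* scripting/compiling.
# model = materialize_batchnorm_affine(model).eval()
# scripted = torch.jit.script(model)
```

Note that this has to run on the eager model before torch.jit.script and trtorch.compile; patching an already-scripted module will not rewrite the frozen graph constants.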

github-actions[bot] commented 2 years ago

This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.