Closed: p1x31 closed this issue 2 years ago
Can you turn on Debug logging and share the log (or at least the parts around the error)?
Does it expect one more argument of NoneType? If so how can I specify NoneType in settings and compile with it?
No, this error occurs when an argument to a converter is expected to be an at::Tensor but None was provided instead. There shouldn't be anything you need to do; it's a bug in whichever converter is handling this incorrectly. The debug log will tell us which one is the issue.
Here is the debug log. Just to provide a little more information: the net is a variational autoencoder. It computes the sampling operation for z by reparameterization of mu and logvar to make the network differentiable during training. However, during inference, z is assigned None.

Stock PyTorch inference call:
def generate_fake(self, input_semantics, degraded_image, real_image, compute_kld_loss=False):
z = None
fake_image = self.netG(input_semantics, degraded_image, z=z)
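For context, the reparameterization step described above typically looks something like this (a minimal sketch of the standard VAE trick, assuming the usual mu/logvar formulation; `reparameterize` is a hypothetical name, not taken from the model's source):

```python
import torch

def reparameterize(mu, logvar, training=True):
    # Training: sample z = mu + sigma * eps, differentiable w.r.t. mu and logvar
    if training:
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std
    # Inference: sampling is skipped and z is simply None,
    # which is what ends up being passed to netG above
    return None
```

This is why the inference path hands a None to the generator while the training path hands it a tensor.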
It was traced with only two arguments
input = (torch.rand(1, 18, 512, 512).to("cuda"), torch.rand(1, 3, 512, 512).to("cuda"))
Torchscript inference call:
fake_image = self.netG(input_semantics, degraded_image)
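The tracing setup can be reproduced on a toy module to see where the NoneType constant comes from (`TinyG` is a hypothetical stand-in for netG with the same optional-z signature; shapes are scaled down from the trace inputs above). Printing `traced.graph` is one way to look for prim::Constant nodes:

```python
import torch

class TinyG(torch.nn.Module):
    # Hypothetical stand-in for netG: z is optional and unused at inference
    def forward(self, input_semantics, degraded_image, z=None):
        if z is None:
            # inference path: no latent sample, behave deterministically
            z = torch.zeros(degraded_image.size(0), 4)
        feat = input_semantics.mean(dim=1, keepdim=True)  # (N, 1, H, W)
        return degraded_image + feat + z.mean()

netG = TinyG().eval()
example = (torch.rand(1, 18, 8, 8), torch.rand(1, 3, 8, 8))
traced = torch.jit.trace(netG, example)
print(traced.graph)  # inspect for NoneType prim::Constant nodes here
```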
Returns True:
print(trtorch.check_method_op_support(model, 'forward'))
Maybe this is the issue:
DEBUG: [TRTorch Conversion Context] - Evaluating %680 : NoneType = prim::Constant()
Bug Description
I'm trying to compile but getting this error:

To Reproduce
Steps to reproduce the behavior:
graph:
Compile settings:

Expected behaviour
Compile like TorchScript

Environment
How you installed PyTorch (conda, pip, libtorch, source): conda

Additional context
Does it expect one more argument of NoneType? If so, how can I specify NoneType in settings and compile with it?
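One way to sidestep the question entirely (a hypothetical workaround sketch, not an official fix from the maintainers) is to wrap the generator so the module being traced exposes only the two tensor inputs, pinning z=None inside the wrapper so no NoneType argument reaches the converter boundary:

```python
import torch

class InferenceWrapper(torch.nn.Module):
    # Hypothetical workaround: pin z=None inside the wrapper so the
    # traced signature carries only the two tensor inputs
    def __init__(self, netG):
        super().__init__()
        self.netG = netG

    def forward(self, input_semantics, degraded_image):
        return self.netG(input_semantics, degraded_image, z=None)
```

The wrapper can then be traced and handed to the compiler in place of the original module.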