gavrin-s opened this issue 5 years ago
I'm also getting this error. It appears when I hit AvgPool2d while trying to convert the ESPNetV2 model from here: https://github.com/sacmehta/EdgeNets
Hi All,
This is most likely because a layer converter has not been implemented. The unsupported layer is likely the one just before the layer that threw the error (its converter never set the _trt attribute on its output).
We’re focused on supporting the models in the README, but the coverage may increase over time.
@gavrin-s are you able to share the model you’re attempting to convert?
FYI, you can see the list of registered converters by running:
```python
import torch2trt
print(torch2trt.CONVERTERS)
```
It may be possible to add support for the unsupported layers by using the `@tensorrt_converter` method described in the README.md.
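For illustration, here is a rough sketch of what such a converter could look like for torch.nn.AvgPool2d (modelled on the built-in MaxPool2d converter whose traceback appears later in this thread; treat it as a starting point, not the official implementation):

```python
import tensorrt as trt
import torch2trt

# Rough, unofficial sketch of an AvgPool2d converter, modelled on the
# built-in MaxPool2d converter referenced later in this thread.
@torch2trt.tensorrt_converter('torch.nn.AvgPool2d.forward')
def convert_AvgPool2d(ctx):
    module = ctx.method_args[0]   # the AvgPool2d module (self)
    input = ctx.method_args[1]    # the PyTorch input tensor
    output = ctx.method_return    # the PyTorch output tensor

    # TensorRT expects 2-tuples for window size, stride and padding
    def pair(x):
        return x if isinstance(x, tuple) else (x, x)

    kernel_size = pair(module.kernel_size)
    stride = pair(module.stride) if module.stride else kernel_size
    padding = pair(module.padding)

    layer = ctx.network.add_pooling(
        input=input._trt, type=trt.PoolingType.AVERAGE, window_size=kernel_size)
    layer.stride = stride
    layer.padding = padding

    # attach the TensorRT tensor so downstream converters can find it
    output._trt = layer.get_output(0)
```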
Please let me know if this helps or you have any other questions.
Best, John
Hi @jaybdub,
I'm trying to reproduce the image segmentation example of this tutorial (the DeepLabv3 model) and I have the same issue as in this topic.
(full error log attached in a collapsed section)
Any ideas why it does not work?
(package versions attached in a collapsed section)
I have a similar error when trying to replicate the image classification example:
```
Traceback (most recent call last):
  File "conversionog.py", line 14, in <module>
    model_trt = torch2trt(model, [data])
  File "Desktop/torch2trt/torch2trt/torch2trt.py", line 252, in torch2trt
    outputs = module(*inputs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 539, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torchvision-0.3.0a0+6a834e9-py3.5-linux-x86_64.egg/torchvision/models/resnet.py", line 208, in forward
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 539, in __call__
    result = self.forward(*input, **kwargs)
  File "Desktop/torch2trt/torch2trt/torch2trt.py", line 97, in wrapper
    converter(ctx)
  File "Desktop/torch2trt/torch2trt/converters/Linear.py", line 12, in convert_Linear
    layer = ctx.network.add_shuffle(input._trt)
AttributeError: 'Tensor' object has no attribute '_trt'
```
Tried running commands in https://github.com/NVIDIA-AI-IOT/torch2trt#convert and encountered similar errors too.
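For reference, the conversion example behind that link is roughly the following (paraphrased from the README; the alexnet model and the input shape are the README's choices):

```python
# Roughly the basic usage example from the torch2trt README (paraphrased).
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# create a regular PyTorch model and an example input on the GPU
model = alexnet(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()

# convert to TensorRT by feeding example data as input
model_trt = torch2trt(model, [x])
```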
Hi, @jaybdub
I can't find Upsample or LeakyReLU in torch2trt.CONVERTERS; are they still not supported? As you said, this can be solved by using @tensorrt_converter, but I don't know how to start. Could you show a demo? Thanks. BTW, I installed torch2trt without plugins.
Hi kawa23,
Thanks for reaching out.
We have implemented the interpolate plugin, which can accomplish upsampling. You will need to install torch2trt with plugins as described in the README.
For LeakyReLU, the following converter should work:
```python
import numpy as np
import tensorrt as trt
import torch2trt

# make sure the TensorRT plugins (including LReLU_TRT) are registered
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, '')


@torch2trt.tensorrt_converter('torch.nn.functional.leaky_relu')
def convert_leaky_relu(ctx):
    input = ctx.method_args[0]
    output = ctx.method_return

    # negative_slope may be passed positionally, by keyword, or left at its default
    negative_slope = 0.01
    if len(ctx.method_args) > 1:
        negative_slope = ctx.method_args[1]
    elif 'negative_slope' in ctx.method_kwargs:
        negative_slope = ctx.method_kwargs['negative_slope']

    # look up the LReLU_TRT plugin and configure its slope
    registry = trt.get_plugin_registry()
    creator = [c for c in registry.plugin_creator_list if c.name == 'LReLU_TRT'][0]
    lrelu_slope_field = trt.PluginField("neg_slope", np.array([negative_slope], dtype=np.float32), trt.PluginFieldType.FLOAT32)
    field_collection = trt.PluginFieldCollection([lrelu_slope_field])
    plugin = creator.create_plugin(name='LReLU_TRT', field_collection=field_collection)

    layer = ctx.network.add_plugin_v2(inputs=[input._trt], plugin=plugin)
    output._trt = layer.get_output(0)
```
After executing the above code, you should be able to convert the model as described in the README.
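For instance, a quick smoke test might look something like this (the TinyNet module and the input shape below are made up for illustration):

```python
import torch

# Hypothetical smoke test for the converter above; TinyNet and the input
# shape are placeholders, not part of the original discussion.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.leaky_relu(x, negative_slope=0.1)

model = TinyNet().cuda().eval()
data = torch.randn(1, 3, 8, 8).cuda()

model_trt = torch2trt.torch2trt(model, [data])

# compare TensorRT output against PyTorch output
print(torch.max(torch.abs(model_trt(data) - model(data))))
```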
Please let me know if this works for you or if you run into any further issues. I'd be happy to take a look at your model if you continue to run into problems.
Best, John
OK, after reading the code, I finally figured out what happens under the hood.
There is a ConversionContext; when you enter it, some of PyTorch's methods are replaced with wrapped methods. The wrapped methods run the original method, then call the registered converter, which adds the matching TensorRT layers and attaches a _trt attribute to the output.
The problem happens when you hit an unhooked method: it silently breaks the later conversion, because part of the network ends up unconnected. The main issue in @gavrin-s's case is not the failure itself, but that you don't know which missing hook triggered it; only digging into the original PyTorch code gives you that insight...
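In other words, the hook roughly behaves like the sketch below (a conceptual illustration pieced together from the tracebacks in this thread, not the actual torch2trt source; the names are made up):

```python
# Conceptual sketch only; see torch2trt/torch2trt.py for the real code.
def make_wrapper(ctx, original_method, converter):
    def wrapper(*args, **kwargs):
        # run the original PyTorch method as usual
        output = original_method(*args, **kwargs)

        # hand the call details to the registered converter, which adds the
        # matching TensorRT layers and sets output._trt
        ctx.method_args = args
        ctx.method_kwargs = kwargs
        ctx.method_return = output
        converter(ctx)

        return output
    return wrapper
```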
@cloudhan Your understanding is correct. Admittedly this makes it a bit harder to debug a missing converter, and currently in some cases reading the code may be necessary.
One tip that may help is to check the grad_fn of the tensor that is missing the _trt attribute. This is set for any non-leaf tensor that requires a gradient. I believe you can check it as follows:

1. Attempt the conversion (this should throw the error):

```python
model_trt = torch2trt(model, [data])
```

2. Launch the debugger post-mortem:

```python
import pdb
pdb.pm()
```

3. Print the grad_fn of the tensor without the _trt attribute:

```
p input.grad_fn
```

4. Manually search for the torch method that corresponds to the printed backward function.
Perhaps we could log this automatically upon a missing _trt error, to provide an extra hint when conversion fails.
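Something along these lines could provide that hint (a rough sketch of the idea only; the helper name is made up and this is not an actual patch):

```python
# Hypothetical helper a converter could use instead of reading input._trt
# directly, so that a missing converter is reported with a hint about which
# torch op produced the tensor.
def get_trt_tensor(tensor):
    if not hasattr(tensor, '_trt'):
        raise AttributeError(
            "Tensor has no _trt attribute. It was produced by %s, so a "
            "converter for the corresponding torch method is probably "
            "missing." % tensor.grad_fn)
    return tensor._trt
```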
Please let me know if this helps or you have any other questions or feedback.
Best, John
I have the same problem; I opened an issue, #124.
AttributeError: 'Tensor' object has no attribute '_trt', even when using the conversion script provided in README.md for alexnet.
@jaybdub
Does torch2trt support nn.Parameter?
When I convert EfficientNet in https://github.com/shariqfarooq123/AdaBins/blob/2fb686a66a304f0a719bc53d77412460af97fd61/models/layers.py#L19, which defines self.positional_encodings = nn.Parameter(torch.rand(500, embedding_dim), requires_grad=True), I get this error:

```
    positional_encodings1 = self.positional_encodings[:embeddings1_shape, :]  # .T.unsqueeze(0)
  File "/home/delight-gpu/project/torch2trt/torch2trt/torch2trt.py", line 300, in wrapper
    converter["converter"](ctx)
  File "/home/delight-gpu/project/torch2trt/torch2trt/converters/getitem.py", line 30, in convert_tensor_getitem
    input_trt = input._trt
AttributeError: 'Parameter' object has no attribute '_trt'
```
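One possible direction (an untested, hypothetical sketch, not an answer from this thread) would be to expose the parameter to TensorRT as a constant so that later converters, such as getitem, find a _trt attribute on it:

```python
import numpy as np

# Untested, hypothetical workaround sketch: register an nn.Parameter as a
# TensorRT constant layer and attach the result as its _trt attribute.
# `ctx` is the torch2trt ConversionContext discussed earlier in this thread.
def add_parameter_as_constant(ctx, param):
    if not hasattr(param, '_trt'):
        weights = param.detach().cpu().numpy().astype(np.float32)
        layer = ctx.network.add_constant(tuple(weights.shape), weights)
        param._trt = layer.get_output(0)
    return param._trt
```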
I get the error AttributeError: 'Tensor' object has no attribute '_trt' when I hit MaxPool2d:

```
/usr/local/lib/python3.6/dist-packages/torch2trt-0.0.0-py3.6.egg/torch2trt/torch2trt.py in wrapper(*args, **kwargs)
     95
     96     #print('%s : %s' % (method.__qualname__, converter.__name__))
---> 97     converter(ctx)
     98
     99     # convert to None so conversion will fail for unsupported layers

/usr/local/lib/python3.6/dist-packages/torch2trt-0.0.0-py3.6.egg/torch2trt/converters/MaxPool2d.py in convert_MaxPool2d(ctx)
     21
     22     layer = ctx.network.add_pooling(
---> 23         input=input._trt, type=trt.PoolingType.MAX, window_size=kernel_size)
     24     layer.stride = stride
     25     layer.padding = padding

AttributeError: 'Tensor' object has no attribute '_trt'
```