NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter
MIT License

AttributeError: 'dict' object has no attribute '_trt' #452

Open · austinmw opened this issue 3 years ago

austinmw commented 3 years ago

Hi, I'm using the FairMOT tracking model with just a ResNet-18 backbone. This model has multiple heads and keeps track of its outputs using a dict. Is it possible for this to be supported?

austinmw commented 3 years ago

Looks like this works after converting the output from a dict to a namedtuple. Would it be possible to add dict output support?

jaybdub commented 3 years ago

Hi @austinmw,

Thanks for pointing this out!

This seems like a reasonable feature to add, but I can't provide any guarantee or timeline. I explored this feature previously but did not implement it, because torch's own JIT tracer did not support dict outputs, so I figured it might not be necessary.
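
For reference, a small toy illustration of that tracer limitation (behavior varies by PyTorch version: recent releases reject dict outputs under the default strict mode and require strict=False, while older ones rejected them outright):

import torch

class DictOut(torch.nn.Module):
    def forward(self, x):
        # Multi-head style output packed in a dict
        return {'hm': x.relu(), 'reg': x * 2}

x = torch.randn(1, 3, 4, 4)
# torch.jit.trace(DictOut(), x)  # fails under the default strict=True
traced = torch.jit.trace(DictOut(), x, strict=False)  # dict output tolerated
print(type(traced(x)))  # still a dict at runtime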

Is this essential for your use-case, or does the namedtuple workaround work alright for now?

Best, John

austinmw commented 3 years ago

Thanks for your response. I think I can get away with namedtuples for now, but dict support would be great in the future since the data structure sees wider usage.

YoushaaMurhij commented 3 years ago

I think I have the same issue! I had a dictionary of heads like this: {'reg': (2, 2), 'height': (1, 2), 'dim': (3, 2), 'rot': (2, 2), 'vel': (2, 2), 'hm': (2, 2)}

Could you please suggest how to avoid this error?

austinmw commented 3 years ago

Change your dict output to a namedtuple:

from collections import namedtuple

# One field per head; build the instance with the same values the dict held,
# in the same order as the field names.
NT = namedtuple('output', ['reg', 'height', 'dim', 'rot', 'vel', 'hm'])
d = NT((2, 2), (1, 2), (3, 2), (2, 2), (2, 2), (2, 2))
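
If other code indexed the dict, the namedtuple fields map over directly; a quick illustration with the toy values above:

print(d.reg)        # (2, 2), previously d['reg']
print(d._asdict())  # recover a dict view for code that still expects one
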
austinmw commented 3 years ago

Something like this:

from collections import namedtuple

# self.heads holds the head names (the dict keys), so the namedtuple gets one
# field per head; define NT wherever self.heads is available, e.g. in __init__.
NT = namedtuple('output', self.heads)

def forward(self, x):
    # Run each head on the shared features and pack the results into the
    # namedtuple instead of a dict.
    ret_nt = NT(*[self.__getattr__(head)(x) for head in self.heads])
    return ret_nt
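
A hedged sketch of what the conversion call might then look like (the input shape here is just an example, and as far as I can tell torch2trt flattens the outputs, so the converted module returns them positionally in the same order as the namedtuple fields):

from torch2trt import torch2trt

model = model.cuda().eval()
x = torch.ones(1, 3, 608, 1088).cuda()  # example input size; use your model's real shape
model_trt = torch2trt(model, [x])
outputs = model_trt(x)  # plain tuple of tensors, one per head
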
YoushaaMurhij commented 3 years ago

Thank you again, but it seems that the problem still persists:

File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py", line 535, in torch2trt
    ctx.mark_outputs(outputs, output_names)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py", line 401, in mark_outputs
    trt_tensor = torch_output._trt
AttributeError: 'output' object has no attribute '_trt'

KiedaTamashi commented 3 years ago

@austinmw Hi austinmw, I am applying torch2trt to the FairMOT tracking model and ran into the same problem. Could I ask how you solved the error "RuntimeError: expected scalar type Float but found Half" from dcn_v2.py? For now I just cast the data to a different dtype to skip it...

@YoushaaMurhij Hi YoushaaMurhij, have you solved your problem? I ran into the same one.

JWLee89 commented 3 years ago

In case anyone comes across this thread and just wants the conclusion and a simple explanation:

In short, if your model outputs multiple heads using a dictionary, you will have trouble compiling your model into a TensorRT engine.

To overcome this, use a static data structure such as a Tuple (NamedTuple also works).
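
A minimal, self-contained sketch of that workaround with a toy two-head model (layer sizes, head names, and input shape are made up for illustration; this is not FairMOT's actual code):

import torch
from torch import nn
from torch2trt import torch2trt

class TwoHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)
        self.hm = nn.Conv2d(8, 1, 1)
        self.reg = nn.Conv2d(8, 2, 1)

    def forward(self, x):
        feat = self.backbone(x)
        # Return a plain tuple instead of {'hm': ..., 'reg': ...}
        return self.hm(feat), self.reg(feat)

model = TwoHead().cuda().eval()
x = torch.randn(1, 3, 32, 32).cuda()
model_trt = torch2trt(model, [x])
hm, reg = model_trt(x)  # the converted module returns the outputs positionally

A namedtuple return behaves the same way here, since a namedtuple is still a tuple.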

snlpatel001213 commented 2 years ago

@JWLee89 Can you explain the resolution in detail?