NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter

Does torch2trt affect normal PyTorch? #262

Open dancingpipi opened 4 years ago

dancingpipi commented 4 years ago

In PyTorch, the following code runs correctly:

import torch

a = torch.ones((4, 3, 13, 13), dtype=torch.uint8)
b = torch.rand((4, 3, 13, 13))

c = b[a]

However, if I use torch2trt to convert a model and then run the same code, an error occurs: IndexError: too many indices for tensor of dimension 4

Can anyone reproduce this error?

Waiting for your help~

jaybdub commented 4 years ago

Hi z13974509906,

Thanks for reaching out! I would like to try and reproduce this issue.

Do you mind sharing the model that you were attempting to convert?

Best, John

dancingpipi commented 4 years ago

> Hi z13974509906,
>
> Thanks for reaching out! I would like to try and reproduce this issue.
>
> Do you mind sharing the model that you were attempting to convert?
>
> Best, John

I followed a PyTorch YOLOv3 implementation and used torch2trt to convert the backbone. After the conversion, PyTorch behaves abnormally as I mentioned: b[a] raises an IndexError, so I have to change it to b[a.tolist()].

If I don't convert the backbone with torch2trt, b[a] works properly.

dancingpipi commented 4 years ago

> Hi z13974509906,
>
> Thanks for reaching out! I would like to try and reproduce this issue.
>
> Do you mind sharing the model that you were attempting to convert?
>
> Best, John

You can also test torchvision.models.resnet.resnet50(); it reproduces this issue.
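
For reference, a minimal reproduction sketch along these lines (assuming torch2trt is installed and a CUDA device is available; the 224x224 input shape is illustrative):

import torch
import torchvision
from torch2trt import torch2trt

# Convert a stock backbone with torch2trt.
model = torchvision.models.resnet50().eval().cuda()
x = torch.rand(1, 3, 224, 224).cuda()
model_trt = torch2trt(model, [x])

# Plain PyTorch indexing in the same process, as in the original report:
a = torch.ones((4, 3, 13, 13), dtype=torch.uint8)
b = torch.rand((4, 3, 13, 13))
c = b[a]  # reported above to raise: IndexError: too many indices for tensor of dimension 4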

dancingpipi commented 4 years ago

Any update?

QZ-cmd commented 4 years ago

IndexError: too many indices for tensor of dimension 2
Hello, I also ran into the same problem. Is there any solution?

dancingpipi commented 4 years ago

> IndexError: too many indices for tensor of dimension 2
> Hello, I also ran into the same problem. Is there any solution?

Using tensor.tolist() may solve it, but it won't work everywhere.

QZ-cmd commented 4 years ago

Hello, where should it be placed? For example, x = torch.randn(1, 3, 244, 244).cuda().tolist()? Can you tell me exactly where to make the change? Thanks.

dancingpipi commented 4 years ago

> Hello, where should it be placed? For example, x = torch.randn(1, 3, 244, 244).cuda().tolist()? Can you tell me exactly where to make the change? Thanks.

No, I found it only impacts indexing. For example, b[a] should be changed to b[a.tolist()].
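
A minimal sketch of that change, with illustrative shapes (note that with a uint8 or bool mask, b[a] and b[a.tolist()] are not strictly equivalent selections, so check the result):

import torch

a = torch.ones((4, 3, 13, 13), dtype=torch.uint8)  # index tensor
b = torch.rand((4, 3, 13, 13))

# Reported to fail after a torch2trt conversion has run in the same process:
# c = b[a]

# Workaround described in this thread: index with a plain Python list instead.
c = b[a.tolist()]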

QZ-cmd commented 4 years ago

Hello, do you have an email? I have some questions for you. Thank you

zhengjiawen commented 4 years ago

I have the same problem. Adding tolist() solves it, but I don't know why.

QZ-cmd commented 4 years ago

> I have the same problem. Adding tolist() solves it, but I don't know why.

Can you tell me where to add tolist()? I haven't solved the problem. Thanks!

zhengjiawen commented 4 years ago

> I have the same problem. Adding tolist() solves it, but I don't know why.
>
> Can you tell me where to add tolist()? I haven't solved the problem. Thanks!

Add tolist() to your index tensor.

dancingpipi commented 4 years ago

> I have the same problem. Adding tolist() solves it, but I don't know why.
>
> Can you tell me where to add tolist()? I haven't solved the problem. Thanks!

I may not be able to help further. The only way to solve this is to debug step by step.

I recommend pdb or code
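
For example, a generic pdb breakpoint placed just before the failing index lets you inspect the tensors interactively (a sketch, not specific to any model):

import pdb
import torch

a = torch.ones((4, 3, 13, 13), dtype=torch.uint8)
b = torch.rand((4, 3, 13, 13))

pdb.set_trace()  # at the prompt, inspect a.dtype, a.shape, b.shape before the index runs
c = b[a]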

chebbyChefNEQ commented 4 years ago

I think this is the same as https://github.com/NVIDIA-AI-IOT/torch2trt/issues/270

PressEtoRace commented 4 years ago

I have the same problem. I debugged the code on an NVIDIA Jetson AGX, and after debugging I think torch2trt really does affect the original PyTorch. Can anyone solve this, or explain the reason for the problem? Thanks!

baheytharwat commented 4 years ago

@dancingpipi @PressEtoRace Did you succeed in optimizing the inference time of YOLOv3? If so, do you detect all objects with good accuracy? The model I am working on can detect only one object!

PressEtoRace commented 4 years ago

> @dancingpipi @PressEtoRace Did you succeed in optimizing the inference time of YOLOv3? If so, do you detect all objects with good accuracy? The model I am working on can detect only one object!

I've had problems like yours. I think this is also caused by the influence of torch2trt on PyTorch. My solution was to modify the NMS code in YOLOv3. You can print the size of the data after each operation in NMS, and you should find the problem, as in the sketch below.
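
As a rough illustration of that kind of shape checking (the variable names and the 100x85 detection layout are hypothetical, not taken from the YOLOv3 code):

import torch

# Hypothetical stand-in for YOLOv3 detections: one row per box, column 4 is the confidence.
detections = torch.rand(100, 85)
conf_thresh = 0.5

conf_mask = detections[:, 4] >= conf_thresh  # boolean mask over rows
print("before mask:", detections.shape, "mask:", conf_mask.shape)
detections = detections[conf_mask]  # the masking step reported to misbehave after conversion
print("after mask:", detections.shape)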

twmht commented 4 years ago

Same problem.

twmht commented 4 years ago

I have also found that when running Mask R-CNN with mmdetection, it throws a CUDA error at torch.Tensor.any():

RuntimeError: CUDA error: an illegal memory access was encountered
terminate called after throwing an instance of 'c10::Error'
  what():  CUDA error: an illegal memory access was encountered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6b (0x7f24d0f949ab in /home/acer/nfs-share/pytorch/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xc10 (0x7f24d11d7280 in /home/acer/nfs-share/pytorch/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f24d0f7ce0d in /home/acer/nfs-share/pytorch/torch/lib/libc10.so)
frame #3: <unknown function> + 0x559d82 (0x7f24ed5eed82 in /home/acer/nfs-share/pytorch/torch/lib/libtorch_python.so)
<omitting python frames>
frame #19: __libc_start_main + 0xe7 (0x7f2538809b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #20: _start + 0x2a (0x56250f80577a in /home/acer/.pyenv/versions/pytorch_build/bin/python)

But after commenting out the getitem converter (https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/converters/getitem.py#L32), it works fine.

I think overriding Tensor.__getitem__ is dangerous, since other torch APIs use it. Why not convert torch.narrow instead (https://pytorch.org/docs/stable/tensors.html)?
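
For slices, a sketch of what torch.narrow looks like in plain PyTorch (torch.narrow is an existing PyTorch op; converting it instead of __getitem__ is only the suggestion made here):

import torch

x = torch.rand(4, 3, 13, 13)

# Slicing goes through Tensor.__getitem__, which the getitem converter linked above targets:
y1 = x[:, 0:2]

# torch.narrow(input, dim, start, length) expresses the same slice without subscript syntax:
y2 = torch.narrow(x, 1, 0, 2)

print(torch.equal(y1, y2))  # True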

1309123499 commented 2 years ago

> I have also found that when running Mask R-CNN with mmdetection, it throws a CUDA error at torch.Tensor.any() [...]. But after commenting out the getitem converter (https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/converters/getitem.py#L32), it works fine.
>
> I think overriding Tensor.__getitem__ is dangerous, since other torch APIs use it. Why not convert torch.narrow instead (https://pytorch.org/docs/stable/tensors.html)?

Would you please show how to solve this problem more clearly? Thanks!

SrivastavaKshitij commented 2 years ago

Tried this example with PR #691 and there is no error.