pytorch / TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
https://pytorch.org/TensorRT
BSD 3-Clause "New" or "Revised" License

🐛 [Bug] Expected input tensors to have type Half, found type float #2113

Status: Open · opened by thesword53 1 year ago

thesword53 commented 1 year ago

Bug Description

TensorRT throws an error about fp32 input tensors even though I am passing fp16 tensors as input.

I attached the file IFRNet.py adapted from https://github.com/ltkong218/IFRNet/blob/main/models/IFRNet.py

To Reproduce

Steps to reproduce the behavior:

  1. Compile model with fp16 inputs and fp16 dtype
  2. Infer model with fp16 tensors
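The two steps above can be sketched as follows (a minimal sketch with a hypothetical toy model standing in for the attached IFRNet; the compile and inference steps are guarded because they need a CUDA GPU with TensorRT available):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the IFRNet model from the attached file.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

if torch.cuda.is_available():
    import torch_tensorrt

    model = TinyModel().half().cuda().eval()

    # Step 1: compile with fp16 inputs and fp16 enabled precision.
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 3, 64, 64), dtype=torch.half)],
        enabled_precisions={torch.half},
    )

    # Step 2: infer with fp16 tensors. This is where the reported
    # "Expected input tensors to have type Half, found type float" is raised.
    x = torch.randn(1, 3, 64, 64, dtype=torch.half, device="cuda")
    out = trt_model(x)
```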

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

Additional context

WARNING: [Torch-TensorRT] - For input embt.1, found user specified input dtype as Half. The compiler is going to use the user setting Half
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Mean converter disregards dtype
WARNING: [Torch-TensorRT] - Trying to record the value 162 with the ITensor (Unnamed Layer* 79) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 185 with the ITensor (Unnamed Layer* 101) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Trying to record the value 43 with the ITensor (Unnamed Layer* 17) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT] - Trying to record the value 67 with the ITensor (Unnamed Layer* 39) [Parametric ReLU]_output again.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT TorchScript Conversion Context] - Unused Input: input_2
WARNING: [Torch-TensorRT TorchScript Conversion Context] - [RemoveDeadLayers] Input Tensor input_2 is unused or used only at compile-time, but is not being removed.
WARNING: [Torch-TensorRT] - Input 0 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
WARNING: [Torch-TensorRT] - Input 1 of engine __torch___wrappers_ifrnet_models_IFRNet_Model_trt_engine_0x5604f02a32e0 was found to be on cpu but should be on cuda:0. This tensor is being moved by the runtime but for performance considerations, ensure your inputs are all on GPU and open an issue here (https://github.com/pytorch/TensorRT/issues) if this warning persists.
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: [Error thrown at /usr/src/debug/python-pytorch-tensorrt/TensorRT/core/runtime/execute_engine.cpp:136] Expected inputs[i].dtype() == expected_type to be true but got false
Expected input tensors to have type Half, found type float

IFRNet.py.gz

narendasan commented 1 year ago

I don't see the Torch-TensorRT code in the link you shared.

@bowang007 Keep an eye on this, might be related to some of your PRs

lei-rs commented 12 months ago

I'm also having this issue

thesword53 commented 12 months ago

I also noticed that a simple sum of two fp16 tensors is implicitly cast to an fp32 tensor.
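For comparison, in eager PyTorch the sum of two fp16 tensors stays fp16; the report above is that the TorchScript conversion path promotes the result to fp32 instead. A quick check of the eager-mode behavior:

```python
import torch

a = torch.randn(4, dtype=torch.half)
b = torch.randn(4, dtype=torch.half)

# Eager-mode type promotion: half + half -> half.
c = a + b
print(c.dtype)  # torch.float16
```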

JXQI commented 10 months ago

I'm also having this issue. How can it be solved?

janblumenkamp commented 8 months ago

I am encountering the same issue.

bowang007 commented 7 months ago

This PR can help resolve the above issue. Thanks!

Eliza-and-black commented 7 months ago

> This PR can help resolve the above issue. Thanks!

@bowang007 Is there any update on your commit? It seems to fail a few checks. Eagerly looking forward to your update.

johnzlli commented 4 months ago

I'm also having this issue!

johnzlli commented 4 months ago

> This PR can help resolve the above issue. Thanks!

There is a new error with this PR. Is there any update?

bowang007 commented 4 months ago

Hi @johnzlli, can you try using the dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!

johnzlli commented 4 months ago

> Hi @johnzlli, can you try using the dynamo path instead? We now support Dynamo, since the TorchScript path is being deprecated. Thanks!

Thanks for your reply! Dynamo is great work, but there is no way to export the compiled model, so we still have to use TorchScript.
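For anyone trying the suggested dynamo path, a minimal sketch (hypothetical toy model; `ir="dynamo"` selects the Dynamo frontend of `torch_tensorrt.compile`, and the compile step is guarded because it needs a CUDA GPU with TensorRT installed):

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any nn.Module works here.
class TinyModel(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

if torch.cuda.is_available():
    import torch_tensorrt

    model = TinyModel().half().cuda().eval()
    x = torch.randn(1, 3, 64, 64, dtype=torch.half, device="cuda")

    trt_model = torch_tensorrt.compile(
        model,
        ir="dynamo",  # use the Dynamo frontend instead of TorchScript
        inputs=[x],
        enabled_precisions={torch.half},
    )
    out = trt_model(x)
```

As johnzlli notes above, at the time of this thread the dynamo-compiled module could not be serialized and reloaded the way a TorchScript module can, which is why some users stayed on the TorchScript path.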