izmttk / TensorRT-Runtime

A Python wrapper for the TensorRT API

ValueError: Failed to parse the ONNX file. #2

Open CodeSnailss opened 3 weeks ago

CodeSnailss commented 3 weeks ago

Hello, author! Thank you very much for providing this ONNX-to-TensorRT conversion solution. I've run into some difficulties.

When the ONNX model I pass to build_engine() has a batch size of 16, it reports the following error:

[10/30/2024-15:24:45] [TRT] [I] Found regisitered local function: stylesync_model_Upsample_generator_to_rgbs_6_upsample_1. Checking as a local function.
[10/30/2024-15:24:45] [TRT] [I] Found regisitered local function: aten_constant_pad_nd|inlined_40. Checking as a local function.
[10/30/2024-15:24:45] [TRT] [I] Found regisitered local function: aten_constant_pad_nd|inlined_41. Checking as a local function.
[10/30/2024-15:24:46] [TRT] [E] In node -1 with name:  and operator:  (convMultiInput): UNSUPPORTED_NODE: Assertion failed: checkSpatialDims(kernel_tensor_ptr->getDimensions()) && "The input tensor shape misaligns with the input kernel shape."
Traceback (most recent call last):
  File "test_convert_engine.py", line 26, in <module>
    feature_engine = build_engine(feature2image_path, precision='fp16', dynamic_shapes=dynamic_shapes)
  File "/autodl-fs/data/digital_human/SSCODE/convert_tensorrt.py", line 27, in build_engine
    raise ValueError('Failed to parse the ONNX file.')
ValueError: Failed to parse the ONNX file.

Below are my model's input and output shapes:

Model Input and Output Shapes:

Inputs:
  - Name: l_face_sequences_, Shape: [16, 6, 512, 512]
  - Name: l_audio_feat_, Shape: [16, 512]

Outputs:
  - Name: act_1, Shape: [16, 3, 512, 512]

This is the code where I call build_engine:

feature2image_path = "compile/f2i/video20241014_150313/feature2image.onnx"

batch_size = 16

dynamic_shapes = {
    'min_shape': [1, 3, 256, 256],
    'opt_shape': [max(1, batch_size // 2), 3, 512, 512],
    'max_shape': [batch_size, 3, 960, 960]
}

feature_engine = build_engine(feature2image_path, precision='fp16', dynamic_shapes=dynamic_shapes)
print("feature engine built successfully!")

I would be very grateful if you could help clear up my confusion. (^U^)ノ~YO

izmttk commented 3 weeks ago

Hi! At first glance this looks like a problem with the input shape configuration, but unfortunately I'm quite busy these two days. I'll take a closer look at this issue in two days, please bear with me 👀

CodeSnailss commented 3 weeks ago

Hi! At first glance this looks like a problem with the input shape configuration, but unfortunately I'm quite busy these two days. I'll take a closer look at this issue in two days, please bear with me 👀

Thank you so much for your reply!!! Your code has helped me a lot; the community is great because of people like you. Thanks♪(・ω・)ノ

I've solved the conversion problem for now. I spent a very long time chasing shape configurations based on the error message; it turned out that I had used torch.onnx.dynamo_export to get the ONNX export to go through, and although the ONNX exported fine, it caused the error above. After switching back to torch.onnx.export, the conversion only succeeded once I changed the opset version from 11 to 10.
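
For reference, a rough sketch of the kind of torch.onnx.export call this ends up being, assuming the PyTorch module is available as a hypothetical feature2image_model; the input/output names and shapes are the ones listed above:

import torch

# Hypothetical module handle; dummy inputs match the shapes listed earlier in this issue.
feature2image_model.eval()
dummy_face = torch.randn(16, 6, 512, 512)
dummy_audio = torch.randn(16, 512)

torch.onnx.export(
    feature2image_model,
    (dummy_face, dummy_audio),
    "feature2image.onnx",
    input_names=["l_face_sequences_", "l_audio_feat_"],
    output_names=["act_1"],
    opset_version=10,  # dropping from opset 11 to 10 is what made the TensorRT parse succeed here
)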

I've now successfully exported the engine file, but I'm running into a new problem during TensorRT inference (still using this project's code, nice), possibly a memory overflow. I think it may be caused by setting batch_size (16) too large.

500it [00:04, 103.15it/s]
imgs_dataset[0].shape (1280, 720, 3)
frame_h, frame_w, _ 1280 720 3
Run face cropping...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500/500 [00:03<00:00, 129.98it/s]
Run generation...
  0%|                                                                                                                                                                   | 0/32 [00:00<?, ?it/s][10/31/2024-19:05:28] [TRT] [E] IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] == dims.d[i]. Static dimension mismatch while setting input shape for image. Set dimensions are [1,6,512,512]. Expected dimensions are [16,6,512,512].)
[10/31/2024-19:05:28] [TRT] [E] IExecutionContext::enqueueV3: Error Code 1: Cuda Runtime (an illegal memory access was encountered)
  0%|                                                                                                                                                                   | 0/32 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "inference_4.py", line 1114, in <module>
    main()
  File "inference_4.py", line 677, in main
    infer_one(model, full_frames_dataset, full_frames_loader, wav_path, save_path, lmk3_list, lmk203_list, affine_matrix_list, data_iter,
  File "inference_4.py", line 1003, in infer_one
    pred = model(img_batch_b[no], mel_batch)
  File "inference_4.py", line 246, in __call__
    f2i_trt_outputs = self.f2i_processor.infer(f2i_inputs)
  File "inference_4.py", line 195, in infer
    self.end_event.record(self.stream)
pycuda._driver.LogicError: cuEventRecord failed: an illegal memory access was encountered

I also tried running ONNX Runtime directly with the TensorrtExecutionProvider, and got an error there as well:

import onnx
import onnxruntime

# Check the model
model = onnx.load(feature2image_path)
onnx.checker.check_model(model)
print("Model checked successfully!")
# Create the session
try:
    f2i_ort_session = onnxruntime.InferenceSession(feature2image_path, providers=['TensorrtExecutionProvider'])
    print("Session created successfully!")
except Exception as e:
    print(f"Failed to create session: {e}")
2024-11-01 09:42:48.494552260 [E:onnxruntime:Default, tensorrt_execution_provider.h:84 log] [2024-11-01 01:42:48   ERROR] IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node /to_rgb1/conv/Conv.)
Failed with mixed execution: [ONNXRuntimeError] : 1 : FAIL : TensorRT EP failed to create engine from network for fused node: TensorrtExecutionProvider_TRTKernel_graph_main_graph_9110311580091207751_0_0
izmttk commented 1 week ago

Hi, sorry for the long delay. Has the problem in this issue been resolved? I can't reproduce it locally; could you provide more information, such as version numbers and a simplified model that triggers the problem?
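
For example, a minimal snippet for collecting the requested version information:

import torch
import onnx
import onnxruntime
import tensorrt

print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("onnx:", onnx.__version__)
print("onnxruntime:", onnxruntime.__version__)
print("tensorrt:", tensorrt.__version__)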

CodeSnailss commented 4 days ago

Hi, sorry for the long delay. Has the problem in this issue been resolved? I can't reproduce it locally; could you provide more information, such as version numbers and a simplified model that triggers the problem?

Thank you for your reply, and no worries, I've already solved this problem. The cause was that I hadn't adjusted the shapes of the input numpy arrays: after switching the model to a batch size of 16, the inputs still followed the old batch-size-1 logic, which led to the errors. Thanks again for your reply and your code!!!
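
In other words, the host-side arrays have to be assembled to the engine's batch dimension before calling infer. A minimal sketch of what that looks like, with hypothetical variable names (face_crops, audio_feats, processor):

import numpy as np

batch_size = 16  # must match the batch dimension the engine was built with

# face_crops: list of per-frame (6, 512, 512) arrays; audio_feats: list of (512,) feature vectors
img_batch = np.stack(face_crops[:batch_size]).astype(np.float32)   # -> (16, 6, 512, 512)
mel_batch = np.stack(audio_feats[:batch_size]).astype(np.float32)  # -> (16, 512)

outputs = processor.infer([img_batch, mel_batch])  # shapes now match the engine's [16, 6, 512, 512] / [16, 512]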

CodeSnailss commented 4 days ago

I've learned a lot from your code 👍

CodeSnailss commented 2 days ago

Hello author, I'd like to ask whether ProcessorV3's infer can be made any faster. Structurally, I moved setting the tensor address into the loop that copies input data to the device so the two could be merged into one loop (it didn't really help...). Is there anything inside infer itself that can be sped up? Below is my current code:

def infer(self, inputs: Union[Dict[str, np.ndarray], List[np.ndarray], np.ndarray]) -> OrderedDict[str, np.ndarray]:
        """
        推理过程:
        1. 创建执行上下文
        2. 设置输入形状
        3. 分配内存
        4. 复制输入数据到设备
        5. 在设备上运行推理
        6. 将输出数据复制到主机并调整形状
        """
        # 设置输入形状,输出形状会自动推断
        if isinstance(inputs, np.ndarray):
            inputs = [inputs]
        if isinstance(inputs, dict):
            inputs = [inp if name in self.input_tensor_names else None for (name, inp) in inputs.items()]
        if isinstance(inputs, list):
            for name, arr in zip(self.input_tensor_names, inputs):
                self.context.set_input_shape(name, arr.shape)

        buffers_host = []
        buffers_device = []
        # Copy the input data to the device
        for name, arr in zip(self.input_tensor_names, inputs):
            host = cuda.pagelocked_empty(arr.shape, dtype=trt.nptype(self.engine.get_tensor_dtype(name)))
            device = cuda.mem_alloc(arr.nbytes)

            host[:] = arr
            cuda.memcpy_htod_async(device, host, self.stream)
            buffers_host.append(host)
            buffers_device.append(device)

            # Set the input tensor address
            self.context.set_tensor_address(name, int(device))

        # Set the output tensor allocator
        for name in self.output_tensor_names:
            self.context.set_tensor_address(name, 0)  # set to a null pointer
            self.context.set_output_allocator(name, self.output_allocator)

        # Record the start event
        self.start_event.record(self.stream)
        # Run inference
        # Using the default stream can cause performance problems, because TensorRT needs an extra cudaDeviceSynchronize() call to ensure correct synchronization. Use a non-default stream instead.
        self.context.execute_async_v3(stream_handle=self.stream.handle)
        # Record the end event
        self.end_event.record(self.stream)
        output_buffers = OrderedDict()
        for name in self.output_tensor_names:
            arr = cuda.pagelocked_empty(self.output_allocator.shapes[name], dtype=trt.nptype(self.engine.get_tensor_dtype(name)))
            cuda.memcpy_dtoh_async(arr, self.output_allocator.buffers[name], stream=self.stream)
            output_buffers[name] = arr

        # Synchronize the stream
        self.stream.synchronize()

        return output_buffers

Besides increasing batch_size, is there any other way to make this infer faster? If you have time to give me some advice, I'd be very grateful!!

izmttk commented 2 days ago

As far as I know, the v2 and v3 interfaces show no difference in inference speed. Frequently allocating and freeing memory should have a noticeable impact on performance, though: if you're going to call infer many times, you could consider allocating the memory up front. I think I forgot to implement that in ProcessorV3, however.
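
As an illustration of the idea (not code that exists in the project yet): the device and page-locked buffers would be allocated once for the largest expected input and then reused on every call. A rough sketch with hypothetical names:

import numpy as np
import pycuda.autoinit  # creates a CUDA context for this standalone sketch
import pycuda.driver as cuda

MAX_INPUT_SHAPE = (16, 6, 512, 512)  # hypothetical: the largest shape this input will take

# One-time setup, e.g. in __init__: allocate page-locked host memory and device memory once.
host_buf = cuda.pagelocked_empty(MAX_INPUT_SHAPE, np.float32)
device_buf = cuda.mem_alloc(host_buf.nbytes)
stream = cuda.Stream()

def upload(arr: np.ndarray) -> int:
    """Reuse the pre-allocated buffers on every call instead of allocating new ones."""
    host_buf.reshape(-1)[: arr.size] = arr.ravel()
    cuda.memcpy_htod_async(device_buf, host_buf, stream)
    return int(device_buf)  # this address is what goes into context.set_tensor_address(name, ...)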

CodeSnailss commented 2 days ago

As far as I know, the v2 and v3 interfaces show no difference in inference speed. Frequently allocating and freeing memory should have a noticeable impact on performance, though: if you're going to call infer many times, you could consider allocating the memory up front. I think I forgot to implement that in ProcessorV3, however.

Yes!! I do call infer frequently. Your code doesn't seem to free the memory anywhere, so every call to infer presumably allocates it anew; if the memory buffers were set up in advance and reused, it should save some time.

Thank you very much for your reply and the suggestions!

CodeSnailss commented 1 day ago

Hello author, building on your code I've added a way to pre-allocate the memory for the output tensors of the inference results:

class PreallocatedOutputAllocator(trt.IOutputAllocator):
    def __init__(self, context: trt.IExecutionContext):
        super().__init__()
        self.buffers = {}
        self.shapes = {}
        # Pre-allocate memory for every output tensor at initialization time
        for name in get_output_tensor_names(context.engine):
            max_size = context.get_max_output_size(name)
            self.buffers[name] = cuda.mem_alloc(max_size)

    def reallocate_output(self, tensor_name: str, memory: int, size: int, alignment: int) -> int:
        # Return the pre-allocated device address directly
        return int(self.buffers[tensor_name])

    def notify_shape(self, tensor_name: str, shape: trt.Dims):
        self.shapes[tensor_name] = tuple(shape)

The inference class below changes where the execution context is created so that the context can be passed into the output pre-allocation class, i.e. PreallocatedOutputAllocator. With this change the program runs about 1-2 s faster.

class ProcessorV3:
    def __init__(self, engine: trt.ICudaEngine):
        # Select the first available GPU device and create a context (optional)
        self.engine = engine
        # Create the execution context
        self.context = engine.create_execution_context()
        self.output_allocator = PreallocatedOutputAllocator(self.context)

        # # Create the execution context
        # self.output_allocator = OutputAllocator()
        # self.context = engine.create_execution_context()
        # Get the input and output tensor names
        self.input_tensor_names = get_input_tensor_names(engine)
        self.output_tensor_names = get_output_tensor_names(engine)
        # Create a stream
        self.stream = cuda.Stream()
        # Create CUDA events
        self.start_event = cuda.Event()
        self.end_event = cuda.Event()

    def get_last_inference_time(self):
        return self.start_event.time_till(self.end_event)

    def infer(self, inputs: Union[Dict[str, np.ndarray], List[np.ndarray], np.ndarray]) -> OrderedDict[str, np.ndarray]:
        """
        推理过程:
        1. 创建执行上下文
        2. 设置输入形状
        3. 分配内存
        4. 复制输入数据到设备
        5. 在设备上运行推理
        6. 将输出数据复制到主机并调整形状
        """
        # 设置输入形状,输出形状会自动推断
        if isinstance(inputs, np.ndarray):
            inputs = [inputs]
        if isinstance(inputs, dict):
            inputs = [inp if name in self.input_tensor_names else None for (name, inp) in inputs.items()]
        if isinstance(inputs, list):
            for name, arr in zip(self.input_tensor_names, inputs):
                self.context.set_input_shape(name, arr.shape)

        buffers_host = []
        buffers_device = []
        # Copy the input data to the device
        for name, arr in zip(self.input_tensor_names, inputs):
            host = cuda.pagelocked_empty(arr.shape, dtype=trt.nptype(self.engine.get_tensor_dtype(name)))
            device = cuda.mem_alloc(arr.nbytes)

            host[:] = arr
            cuda.memcpy_htod_async(device, host, self.stream)
            buffers_host.append(host)
            buffers_device.append(device)

            # Set the input tensor address
            self.context.set_tensor_address(name, int(device))

        # Set the output tensor allocator
        for name in self.output_tensor_names:
            self.context.set_tensor_address(name, 0)  # set to a null pointer
            self.context.set_output_allocator(name, self.output_allocator)

        # Record the start event
        self.start_event.record(self.stream)
        # Run inference
        # Using the default stream can cause performance problems, because TensorRT needs an extra cudaDeviceSynchronize() call to ensure correct synchronization. Use a non-default stream instead.
        self.context.execute_async_v3(stream_handle=self.stream.handle)
        # Record the end event
        self.end_event.record(self.stream)
        output_buffers = OrderedDict()
        for name in self.output_tensor_names:
            arr = cuda.pagelocked_empty(self.output_allocator.shapes[name], dtype=trt.nptype(self.engine.get_tensor_dtype(name)))
            cuda.memcpy_dtoh_async(arr, self.output_allocator.buffers[name], stream=self.stream)
            output_buffers[name] = arr

        # Synchronize the stream
        self.stream.synchronize()

        return output_buffers

There is also the fact that page-locked memory is re-allocated on every inference call; I think this part could likewise be optimized by pre-allocating and reusing the page-locked buffers. I imitated the approach I used for the output tensors, but the overall time didn't improve and actually got slower. Is page-locked allocation already as good as it gets, or is there something wrong with my implementation that makes it unsuitable for pre-allocating the inputs?

class PreallocatedInputAllocator:
    def __init__(self, context: trt.IExecutionContext):
        self.engine = context.engine
        self.buffers = {}
        self.shapes = {}
        self.host_buffers = {}

        # Pre-allocate memory for every input tensor
        for name in get_input_tensor_names(self.engine):
            # Confirm this is an input tensor
            assert self.engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT

            # Get the tensor shape and dtype
            shape = self.engine.get_tensor_shape(name)
            shape = tuple(shape)  # convert Dims to a tuple
            dtype = trt.nptype(self.engine.get_tensor_dtype(name))
            size = int(abs(np.prod(shape)) * dtype().itemsize)

            # Allocate memory
            self.buffers[name] = cuda.mem_alloc(size)
            self.host_buffers[name] = cuda.pagelocked_empty(shape, dtype=dtype)

    def notify_shape(self, tensor_name: str, shape: trt.Dims):
        self.shapes[tensor_name] = tuple(shape)  # make sure to convert to a tuple here as well
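
For what it's worth, a rough sketch of how the copy loop in infer might consume these pre-allocated buffers instead of calling pagelocked_empty / mem_alloc on every call; the names follow the class above, and self.input_allocator is hypothetical:

# Hypothetical: self.input_allocator = PreallocatedInputAllocator(self.context) created once in __init__.
for name, arr in zip(self.input_tensor_names, inputs):
    self.context.set_input_shape(name, arr.shape)

    host = self.input_allocator.host_buffers[name]   # reuse the page-locked host buffer
    device = self.input_allocator.buffers[name]      # reuse the device buffer
    host.reshape(-1)[: arr.size] = arr.ravel()       # copy this batch into the existing buffer
    cuda.memcpy_htod_async(device, host, self.stream)

    self.context.set_tensor_address(name, int(device))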

Thank you for your kind help!