steb6 / ISBFSAR

Interactive Skeleton Based Few Shot Action Recognition

Getting error in loading the engine #5

Closed Harsh-Vavaiya closed 1 year ago

Harsh-Vavaiya commented 1 year ago

@StefanoBerti @andrearosasco I've used this command "docker run -it --rm --gpus=all -v "D:/amnt/Quidich_pose_estimation_poc/ISBFSAR":/home/ecub ecub:latest python modules/hpe/hpe.py"

and I'm getting the error below. Can you help me with this?

==========
== CUDA ==
==========

CUDA Version 11.3.1

Container image Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

Ubuntu: True
2023-03-27 07:23:06.763 | INFO     | utils.tensorrt_runner:__init__:22 - Loading yolo engine...
[03/27/2023-07:23:07] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReader::42] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 232, Serialized Engine Version: 213)
[03/27/2023-07:23:07] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::66] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
  File "modules/hpe/hpe.py", line 181, in <module>
    h = HumanPoseEstimator(MetrabsTRTConfig(), RealSenseIntrinsics())
  File "modules/hpe/hpe.py", line 42, in __init__
    self.yolo = Runner(model_config.yolo_engine_path)  # model_config.yolo_engine_path
  File "/home/ecub/utils/tensorrt_runner.py", line 36, in __init__
    for binding in engine:
TypeError: 'NoneType' object is not iterable
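
What the log shows: `deserialize_cuda_engine` returns `None` when the engine file was serialized with a different TensorRT version (here 213 vs. the installed 232), and the runner then iterates over that `None`. A hypothetical guard along these lines, not the repo's actual code, would surface the real cause:

```python
# Hypothetical guard (illustrative, not the repo's tensorrt_runner.py).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        # Clearer than the downstream "'NoneType' object is not iterable".
        raise RuntimeError(
            f"Could not deserialize {engine_path}: it was likely built with "
            "a different TensorRT version; rebuild it on this machine."
        )
    return engine
```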
andrearosasco commented 1 year ago

Hi @Harsh-Vavaiya, this is probably happening because TensorRT engines are platform-dependent.

There should be a build_engine.py script that takes the ONNX model and generates the TRT engine on your machine.

@StefanoBerti should know the location of the script.
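
For reference, a minimal sketch of what such a script typically does with the TensorRT Python API; the paths, workspace size, and function name are placeholders, not the repo's actual ones:

```python
# Minimal sketch: parse an ONNX file and serialize a TensorRT engine.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError(f"Failed to parse {onnx_path}")
    config = builder.create_builder_config()
    # TensorRT >= 8.4; older versions set config.max_workspace_size instead.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_engine("yolo.onnx", "yolo.engine")  # placeholder file names
```

The same build can be run from the command line with trtexec --onnx=model.onnx --saveEngine=model.engine; either way it has to happen on the machine (and TensorRT version) that will execute the engine.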

steb6 commented 1 year ago

Hi @Harsh-Vavaiya, as @andrearosasco said, you need to build the engine on the platform where you want to run inference. You can find the necessary scripts in modules/hpe/utils. Note that this is a work in progress, so the README is not up to date (sorry for that). The updated work is in another repository; you may want to take a look there, since the code is much more optimized and I plan to release much more. Do not hesitate to open another issue or write me an e-mail about other problems!

Harsh-Vavaiya commented 1 year ago

@StefanoBerti @andrearosasco Thank you, guys!!

Harsh-Vavaiya commented 1 year ago

Hi @StefanoBerti,

I want to use only your HPE module in another project. Should I use this repo's code or your latest one? And does the latest one work with the new (2023) MeTRAbs models?

steb6 commented 1 year ago

Hi @Harsh-Vavaiya, if you just want to use the HPE module, it is probably better to stick with this code, since the other one is more complex. You have to build the TRT engines starting from the ONNXs. I uploaded the ONNXs here; download them, place them inside modules/hpe/weights, and run the script modules/hpe/setup/7_create_engines.py (you may want to check the paths). After that, you should be able to run the hpe.py script. Note that creating the engines takes some time (about 30 minutes).
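
A quick sanity check once 7_create_engines.py has finished, before launching hpe.py; a sketch, and the engine path below is an assumption (point it at whatever the setup script wrote):

```python
# Sketch: verify a freshly built engine deserializes on this machine.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Assumed path; adjust to where 7_create_engines.py writes its output.
with open("modules/hpe/weights/yolo.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as rt:
    engine = rt.deserialize_cuda_engine(f.read())
assert engine is not None, "Deserialization failed: rebuild the engine on this machine."
print("bindings:", [engine.get_binding_name(i) for i in range(engine.num_bindings)])
```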

Harsh-Vavaiya commented 1 year ago

@StefanoBerti Thank you for the ONNX files!!

Have you seen the index of MeTRAbs models? index

There are new versions of the models; I want to test this one: metrabs_eff2xl_y4_384px_800k_28ds.tar.gz

Please give me your thoughts on it. Is it enough to change the input size in your code, or will it take more effort?

steb6 commented 1 year ago

@Harsh-Vavaiya I didn't know he had released new models, thanks for the update! You need to extract two ONNXs: one for the backbone and one for the heads. It shouldn't be too difficult; you can see how I extracted the ONNXs in these files:

It should be quite easy to make everything work. I don't have time to do it right now, but it is something I will definitely try!
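
For orientation, a hypothetical sketch of wrapping one callable of a TF SavedModel and exporting it with tf2onnx; the attribute name, input shape, and opset are placeholders, not MeTRAbs' actual signatures:

```python
# Hypothetical sketch: export one part of a SavedModel (e.g. the backbone)
# to ONNX. All names and shapes below are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.saved_model.load("metrabs_eff2xl_y4_384px_800k_28ds")

spec = [tf.TensorSpec([1, 384, 384, 3], tf.float32, name="images")]

@tf.function(input_signature=spec)
def backbone(images):
    return model.backbone(images)  # placeholder: the real attribute may differ

tf2onnx.convert.from_function(
    backbone, input_signature=spec, opset=13, output_path="backbone.onnx")
```

The heads would be exported the same way with their own input signature, giving the two ONNXs to feed into the engine-building step above.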