Closed Harsh-Vavaiya closed 1 year ago
Hi @Harsh-Vavaiya, this is probably happening because TensorRT engines are platform-dependent.
There should be a build_engine.py script that takes the ONNX and generates the TRT engine on your machine.
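The conversion step mentioned above can be sketched with TensorRT's `trtexec` command-line tool. This is only an illustrative sketch, not the repo's actual `build_engine.py`: the ONNX and engine file names below are placeholders, and the snippet only runs the conversion if `trtexec` is actually on your `PATH`.

```python
import shutil
import subprocess

def build_engine_cmd(onnx_path: str, engine_path: str, fp16: bool = True):
    """Compose a trtexec invocation that converts an ONNX model into a
    TensorRT engine built for the *current* machine and GPU."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # half precision, if the GPU supports it
    return cmd

# Placeholder paths -- substitute the real ONNX exported by the repo.
cmd = build_engine_cmd("modules/hpe/weights/bbone.onnx",
                       "modules/hpe/weights/bbone.engine")
print(" ".join(cmd))

# Only attempt the (slow) conversion when trtexec is installed.
if shutil.which("trtexec"):
    subprocess.run(cmd, check=True)
```

Because the engine is tuned to the local GPU and TensorRT version, this step must be rerun on every machine where you want to do inference.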
@StefanoBerti should know the location of the script
Hi @Harsh-Vavaiya, as @andrearosasco said, you need to build the engine on the platform where you want to run the inference. You can find the scripts needed in modules/hpe/utils.
Note that this is a work in progress, so the README is not updated (sorry for that). The updated work is in another repository; you may want to take a look there, since that code is much more optimized and I plan to release much more.
Do not hesitate to open another issue or write me an e-mail for other problems!
@StefanoBerti @andrearosasco Thank you, guys !!
Hi @StefanoBerti,
I want to use only your HPE module extensively in another project. Should I use this repo's code or your latest one? And does the latest one work with the new (2023) metrabs models?
Hi @Harsh-Vavaiya,
if you just want to use the HPE module, it is probably better to stick to this code, since the other one is more complex.
You have to build the TRT engine starting from the ONNXs.
I uploaded the ONNXs here; download them and place them inside modules/hpe/weights
and run the script modules/hpe/setup/7_create_engines.py
(you may want to check the path).
After that, you should be able to run the hpe.py script.
Note that the creation of the engine requires some time (about 30 minutes).
@StefanoBerti Thank you for the ONNXs files !!
Have you seen the metrabs model index? There are new versions of the models; I want to test this one: metrabs_eff2xl_y4_384px_800k_28ds.tar.gz.
Please give me your thoughts on it. Should I only change the input size in your code, or will it take more effort?
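On the input-size question: the metrabs archive names encode the square input resolution (here `384px`), so one small stdlib sketch can pull it out programmatically rather than hard-coding it. The naming convention is inferred from the file name above; double-check it against the model index before relying on it.

```python
import re

def input_resolution(model_name: str) -> int:
    """Extract the square input resolution encoded in a metrabs
    model name, e.g. '..._384px_...' -> 384."""
    match = re.search(r"_(\d+)px_", model_name)
    if match is None:
        raise ValueError(f"no '<N>px' tag in {model_name!r}")
    return int(match.group(1))

print(input_resolution("metrabs_eff2xl_y4_384px_800k_28ds"))  # 384
```

Changing the input size alone is unlikely to be enough, though: the backbone and heads still need to be re-exported to ONNX for the new architecture, as described in the next comment.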
@Harsh-Vavaiya I didn't know that he released new models, thanks for the update! You will need to extract two ONNXs: one for the backbone and one for the heads. It shouldn't be too difficult; you can see how I extracted the ONNXs in these files:
modules/hpe/setup/2_extract_bbone_heads.py
modules/hpe/setup/3_extract_bbone_onnx.py
modules/hpe/setup/4_extract_heads_onnx.py
It should be quite easy to make everything work. I don't have time to do it right now, but it is something I will definitely try!
@StefanoBerti @andrearosasco I've used this command:
docker run -it --rm --gpus=all -v "D:/amnt/Quidich_pose_estimation_poc/ISBFSAR":/home/ecub ecub:latest python modules/hpe/hpe.py
and I'm getting the error below. Please help me with this.