Open salahzoubi opened 1 month ago
It's related to the TensorRT plugin. The current plugin is built on Google Colab with CUDA 12.1 and TensorRT 10.2, so if you run locally, please follow this repo to build the plugin again and replace the path to the 3D grid sample plugin with your new one: https://github.com/SeanWangJS/grid-sample3d-trt-plugin. By the way, the task flag is --task image, not --task ["image"]. @salahzoubi
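For reference, a corrected invocation with the flag fix above, reusing the script name and example paths quoted later in this thread (the other flags are unchanged from the original report):

```shell
# Pass the task as a bare string, not a Python-style list.
python run_live_portrait.py \
    --driving_video 'experiment_examples/examples/driving/d5.mp4' \
    --source_image 'experiment_examples/examples/source/s6.jpg' \
    --task image
```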
@aihacker111
thanks for the quick reply!
It seems like the repo above throws errors when building, particularly:
[ 20%] Building CXX object CMakeFiles/grid_sample_3d_plugin.dir/src/grid_sample_3d_plugin.cpp.o
/home/ubuntu/grid-sample3d-trt-plugin/src/grid_sample_3d_plugin.cpp:6:10: fatal error: NvInfer.h: No such file or directory
6 | #include <NvInfer.h>
| ^~~~~~~~~~~
compilation terminated.
make[2]: *** [CMakeFiles/grid_sample_3d_plugin.dir/build.make:76: CMakeFiles/grid_sample_3d_plugin.dir/src/grid_sample_3d_plugin.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:100: CMakeFiles/grid_sample_3d_plugin.dir/all] Error 2
make: *** [Makefile:101: all] Error 2
Any idea how to move from here?
@salahzoubi You should install TensorRT from the NVIDIA website, not from pip. After installing TensorRT, copy the path to the TensorRT folder, and then check the path to your CUDA installation.
Modify line 30 in CMakeLists.txt to: set_target_properties(${PROJECT_NAME} PROPERTIES CUDA_ARCHITECTURES "60;70;75;80;86")
Please make sure all the paths are correct and run again.
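The steps above can be sketched as a build script. The TensorRT and CUDA locations below are placeholders you must adjust to your own install paths, and the `TensorRT_ROOT` CMake variable name is an assumption about how the plugin repo's CMakeLists locates NvInfer.h:

```shell
# Sketch of rebuilding the plugin against a locally installed TensorRT.
# /opt/TensorRT-10.2 and /usr/local/cuda are assumed paths -- adjust them.
git clone https://github.com/SeanWangJS/grid-sample3d-trt-plugin
cd grid-sample3d-trt-plugin
mkdir build && cd build
cmake .. \
    -DTensorRT_ROOT=/opt/TensorRT-10.2 \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
make -j"$(nproc)"
# Point the 3D grid sample plugin path in LivePortrait at the
# resulting .so instead of the prebuilt Colab one.
```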
@aihacker111 I just tried multiple Docker containers with TensorRT installed (older versions like 8.x, and 10.x up to 10.2). I've tried replacing the libgrid.so file as well, but I still get either the same error mentioned above, or:
Downloaded successfully and already saved
Downloaded successfully and already saved
/usr/local/lib/python3.10/dist-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
[08/12/2024-18:27:54] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReaderInitCommon::46] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 236, Serialized Engine Version: 238)
root@1a7148ad5f41:~/LivePortrait# python run_live_portrait.py --driving_video './experiment_examples/examples/driving/d3.mp4' --source_image './experiment_examples/examples/source/s7.jpg' --task 'webcam' --run_time --half_precision
Downloaded successfully and already saved
Downloaded successfully and already saved
/usr/local/lib/python3.10/dist-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
[08/12/2024-18:34:57] [TRT] [E] 1: [stdArchiveReader.cpp::StdArchiveReaderInitCommon::46] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 236, Serialized Engine Version: 238)
Traceback (most recent call last):
File "/root/LivePortrait/run_live_portrait.py", line 64, in <module>
main(args.driving_video, args.source_image, args.source_video, args.condition_image, args.max_faces, args.run_time,
File "/root/LivePortrait/run_live_portrait.py", line 19, in main
live_portrait = EfficientLivePortrait(use_tensorrt, half_precision, cropping_video, **kwargs)
File "/root/LivePortrait/LivePortrait/fast_live_portrait_pipeline.py", line 16, in __init__
super().__init__(use_tensorrt, half, **kwargs)
File "/root/LivePortrait/LivePortrait/live_portrait/portrait.py", line 16, in __init__
self.predictor = EfficientLivePortraitPredictor(use_tensorrt, half, **kwargs)
File "/root/LivePortrait/LivePortrait/commons/predictor.py", line 16, in __init__
self.trt_engine = TensorRTEngine(self.half, **kwargs)
File "/root/LivePortrait/LivePortrait/commons/utils/tensorrt_driver.py", line 97, in __init__
self.initialize_engines()
File "/root/LivePortrait/LivePortrait/commons/utils/tensorrt_driver.py", line 107, in initialize_engines
raise RuntimeError(f"Failed to load engine for {model_name}")
RuntimeError: Failed to load engine for feature_extractor
which probably means that I need to re-export the ONNX files to TRT, right? I see there's one script out there that does it for a particular file; is there a script that converts all the files? Or is there some other fix involved?
I'm not sure how to deal with this error in particular. It seems like these engines were built with a very specific TensorRT version that I don't have access to. I'm running this on an H100 with CUDA 11.8; any idea how to fix this? Here's the exact command I'm using:
python run_live_portrait.py --driving_video 'experiment_examples/examples/driving/d5.mp4' --source_image 'experiment_examples/examples/source/s6.jpg' --task ['image']
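The serialization error above ("Current Version: 236, Serialized Engine Version: 238") means the shipped engines were serialized by a newer TensorRT than the installed runtime; TensorRT engines are not portable across versions, so they have to be rebuilt locally. One possible way to re-export all the ONNX models at once is `trtexec`; the `models/` directory layout and the plugin filename below are assumptions, not paths from the repo:

```shell
# Hypothetical batch re-export: rebuild every ONNX model into a TensorRT
# engine with the locally installed TRT so the version tags match.
# --plugins loads the rebuilt grid-sample-3d plugin so its op resolves,
# and --fp16 matches the --half_precision flag used above.
for onnx in models/*.onnx; do
    trtexec --onnx="$onnx" \
            --saveEngine="${onnx%.onnx}.trt" \
            --fp16 \
            --plugins=./libgrid_sample_3d_plugin.so
done
```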