gustvcz opened this issue 2 months ago
It seems like both the WarpingSpadeModel and the WarpingSpadeModel-fix error out when trying to convert from ONNX to TRT. In particular, you get this error:
```
self.config.max_workspace_size = 12 * (2 ** 30)  # 12 GB
[08/09/2024-06:30:33] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/09/2024-06:30:33] [TRT] [E] [network.cpp::addGridSample::1537] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::addGridSample::1537, condition: input.getDimensions().nbDims == 4
)
```
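For context, TensorRT's native `addGridSample` layer only accepts 4-D input, while the warping module samples a 5-D feature volume (see `feature_3d` with shape `(1, 32, 16, 64, 64)` in the logs below), which is exactly what the `nbDims == 4` check rejects. A quick way to confirm this against the graph, as a sketch assuming only the `onnx` package:

```python
# Sketch (not part of the repo): list GridSample-style nodes in the ONNX graph
# and the rank of their first input. A rank-5 input explains the
# "nbDims == 4" rejection from TensorRT's native GridSample layer.
import onnx
from onnx import shape_inference

model = onnx.load("./checkpoints/liveportrait_onnx/warping_spade.onnx")
inferred = shape_inference.infer_shapes(model)
ranks = {vi.name: len(vi.type.tensor_type.shape.dim)
         for vi in list(inferred.graph.value_info) + list(inferred.graph.input)}
for node in inferred.graph.node:
    if "GridSample" in node.op_type:
        # Rank may show as "unknown" for custom ops shape inference can't see.
        print(node.op_type, node.name, "first input rank:",
              ranks.get(node.input[0], "unknown"))
```

(The -fix model routes the 5-D case through a custom `GridSample3D` plugin, which is why its conversion gets further before failing for a different reason, as shown in the logs below.)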
It looks like you might not have the correct environment set up. Please share more information: your system, how you installed and set up the runtime environment, and whether you've downloaded the ONNX files.
@warmshao this is based on the Docker v2 image that you recently released, running with NVIDIA toolkit 11.8 on an H100. I downloaded the ONNX files with the command in the README and then ran the .sh script. Oddly enough, the script works for all the files except this one. Not sure what the problem might be?
Hi, I'm using Ubuntu 22.04 with the Docker image shaoguo/faster_liveportrait:v2.
This is what I get:
```
(base) root@ee24f760d24a:~/FasterLivePortrait# python app.py --mode onnx --mp
ERROR:albumentations.check_version:Error fetching version info
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/root/miniconda3/lib/python3.10/http/client.py", line 1283, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/root/miniconda3/lib/python3.10/http/client.py", line 1329, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/root/miniconda3/lib/python3.10/http/client.py", line 1278, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/root/miniconda3/lib/python3.10/http/client.py", line 1038, in _send_output
    self.send(msg)
  File "/root/miniconda3/lib/python3.10/http/client.py", line 976, in send
    self.connect()
  File "/root/miniconda3/lib/python3.10/http/client.py", line 1448, in connect
    super().connect()
  File "/root/miniconda3/lib/python3.10/http/client.py", line 942, in connect
    self.sock = self._create_connection(
  File "/root/miniconda3/lib/python3.10/socket.py", line 824, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/root/miniconda3/lib/python3.10/socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/albumentations/check_version.py", line 29, in fetch_version_info
    with opener.open(url, timeout=2) as response:
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 519, in open
    response = self._open(req, data)
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 496, in _call_chain
    result = func(*args)
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/root/miniconda3/lib/python3.10/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>
load Human Model >>>
loading model: warping_spade {'name': 'WarpingSpadeModel', 'predict_type': 'ort', 'model_path': './checkpoints/liveportrait_onnx/warping_spade.onnx'}
Traceback (most recent call last):
  File "/root/FasterLivePortrait/app.py", line 32, in <module>
    gradio_pipeline = GradioLivePortraitPipeline(infer_cfg)
  File "/root/FasterLivePortrait/src/pipelines/gradio_live_portrait_pipeline.py", line 32, in __init__
    super(GradioLivePortraitPipeline, self).__init__(cfg, **kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 27, in __init__
    self.init(**kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 31, in init
    self.init_models(is_animal=False, **kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 51, in init_models
    self.model_dict[model_name] = getattr(models, self.cfg.models[model_name]["name"])(
  File "/root/FasterLivePortrait/src/models/warping_spade_model.py", line 20, in __init__
    super(WarpingSpadeModel, self).__init__(**kwargs)
  File "/root/FasterLivePortrait/src/models/base_model.py", line 13, in __init__
    self.predictor = get_predictor(**self.kwargs)
  File "/root/FasterLivePortrait/src/models/predictor.py", line 255, in get_predictor
    return OnnxRuntimePredictorSingleton(**kwargs)
  File "/root/FasterLivePortrait/src/models/predictor.py", line 242, in __new__
    assert os.path.exists(model_path), "model path must exist!"
AssertionError: model path must exist!
Exception ignored in: <function BaseModel.__del__ at 0x7fba1dfb2560>
Traceback (most recent call last):
  File "/root/FasterLivePortrait/src/models/base_model.py", line 48, in __del__
    if self.predictor is not None:
AttributeError: 'WarpingSpadeModel' object has no attribute 'predictor'
```
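The final `AssertionError: model path must exist!` just means `warping_spade.onnx` isn't on disk under `./checkpoints/liveportrait_onnx/`. A minimal pre-flight check, with the file list left as an assumption to fill in from the README's download step:

```python
# Sketch: fail early with a readable message if expected ONNX models are
# missing, instead of the opaque assertion at app startup.
import os

ckpt_dir = "./checkpoints/liveportrait_onnx"
expected = ["warping_spade.onnx", "warping_spade-fix.onnx"]  # assumed list; extend per README
missing = [name for name in expected
           if not os.path.exists(os.path.join(ckpt_dir, name))]
print("missing models:", ", ".join(missing) if missing else "none")
```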
It looks like you haven't downloaded all the ONNX models.
Run `python scripts/onnx2trt.py -o ./checkpoints/liveportrait_onnx/warping_spade-fix.onnx` and show me the log.
@warmshao
```
(base) root@72eb18899165:~/FasterLivePortrait# python scripts/onnx2trt.py -o checkpoints/liveportrait_onnx/warping_spade-fix.onnx
[08/09/2024-03:57:11] [TRT] [I] [MemUsageChange] Init CUDA: CPU +152, GPU +0, now: CPU 163, GPU 511 (MiB)
[08/09/2024-03:57:17] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +2474, GPU +754, now: CPU 2714, GPU 1265 (MiB)
[08/09/2024-03:57:17] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
/root/FasterLivePortrait/scripts/onnx2trt.py:61: DeprecationWarning: Use set_memory_pool_limit instead.
self.config.max_workspace_size = 12 * (2 ** 30) # 12 GB
[08/09/2024-03:57:17] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/09/2024-03:57:17] [TRT] [I] No importer registered for op: GridSample3D. Attempting to import as plugin.
[08/09/2024-03:57:17] [TRT] [I] Searching for plugin: GridSample3D, plugin_version: 1, plugin_namespace:
paddingMode: 0
interpolationMode: 0
[08/09/2024-03:57:17] [TRT] [I] Successfully created plugin: GridSample3D
output datatype: 0
output datatype: 0
[08/09/2024-03:57:17] [TRT] [I] No importer registered for op: GridSample3D. Attempting to import as plugin.
[08/09/2024-03:57:17] [TRT] [I] Searching for plugin: GridSample3D, plugin_version: 1, plugin_namespace:
paddingMode: 0
interpolationMode: 0
[08/09/2024-03:57:17] [TRT] [I] Successfully created plugin: GridSample3D
output datatype: 0
output datatype: 0
INFO:EngineBuilder:Network Description
INFO:EngineBuilder:Input 'feature_3d' with shape (1, 32, 16, 64, 64) and dtype DataType.FLOAT
INFO:EngineBuilder:Input 'kp_driving' with shape (1, 21, 3) and dtype DataType.FLOAT
INFO:EngineBuilder:Input 'kp_source' with shape (1, 21, 3) and dtype DataType.FLOAT
INFO:EngineBuilder:Output 'out' with shape (1, 3, 512, 512) and dtype DataType.FLOAT
/root/FasterLivePortrait/scripts/onnx2trt.py:109: DeprecationWarning: Use network created with NetworkDefinitionCreationFlag::EXPLICIT_BATCH flag instead.
self.builder.max_batch_size = 1
INFO:EngineBuilder:Building fp16 Engine in /root/FasterLivePortrait/checkpoints/liveportrait_onnx/warping_spade-fix.trt
/root/FasterLivePortrait/scripts/onnx2trt.py:132: DeprecationWarning: Use build_serialized_network instead.
with self.builder.build_engine(self.network, self.config) as engine, open(engine_path, "wb") as f:
[08/09/2024-04:05:37] [TRT] [I] Graph optimization time: 1.3683 seconds.
[08/09/2024-04:05:37] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 3574, GPU 2005 (MiB)
[08/09/2024-04:05:37] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +14, now: CPU 3574, GPU 2019 (MiB)
[08/09/2024-04:05:37] [TRT] [W] TensorRT was linked against cuDNN 8.9.0 but loaded cuDNN 8.5.0
[08/09/2024-04:05:37] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
Error in grid_sample_3d_cuda: CUDA driver version is insufficient for CUDA runtime version
Error in grid_sample_3d_cuda: CUDA driver version is insufficient for CUDA runtime version
[08/09/2024-04:06:36] [TRT] [W] No valid obedient candidate choices for node /dense_motion_network/GridSample that meet the preferred precision. The remaining candidate choices will be profiled.
[08/09/2024-04:06:36] [TRT] [E] 10: Could not find any implementation for node /dense_motion_network/GridSample.
[08/09/2024-04:06:36] [TRT] [E] 10: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node /dense_motion_network/GridSample.)
Traceback (most recent call last):
  File "/root/FasterLivePortrait/scripts/onnx2trt.py", line 161, in <module>
    main(args)
  File "/root/FasterLivePortrait/scripts/onnx2trt.py", line 140, in main
    builder.create_engine(
  File "/root/FasterLivePortrait/scripts/onnx2trt.py", line 132, in create_engine
    with self.builder.build_engine(self.network, self.config) as engine, open(engine_path, "wb") as f:
AttributeError: __enter__
```
I believe it's to do with the NVIDIA toolkit version, which I guess relates to my other thread as well. How can I use a CUDA toolkit that's compatible with the H100 (which only supports 11.8+) straight from the Docker image, rather than setting it up manually?

I get similar output for the normal warping_spade as well:
```
(base) root@72eb18899165:~/FasterLivePortrait# python scripts/onnx2trt.py -o checkpoints/liveportrait_onnx/warping_spade.onnx
[08/09/2024-04:08:44] [TRT] [I] [MemUsageChange] Init CUDA: CPU +152, GPU +0, now: CPU 163, GPU 511 (MiB)
[08/09/2024-04:08:50] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +2474, GPU +754, now: CPU 2714, GPU 1265 (MiB)
[08/09/2024-04:08:50] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
/root/FasterLivePortrait/scripts/onnx2trt.py:61: DeprecationWarning: Use set_memory_pool_limit instead.
self.config.max_workspace_size = 12 * (2 ** 30) # 12 GB
[08/09/2024-04:08:51] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/09/2024-04:08:51] [TRT] [E] [network.cpp::addGridSample::1537] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/network.cpp::addGridSample::1537, condition: input.getDimensions().nbDims == 4
)
Segmentation fault (core dumped)
```
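Two side notes on these logs. The `AttributeError: __enter__` at the end of the first run isn't the root failure: `build_engine` returns `None` when the build fails, so the `with ... as engine:` has nothing to enter. And the deprecation warnings have direct replacements in the TensorRT 8.x Python API. Below is a minimal sketch of the modernized builder flow (assuming TensorRT 8.4+); it does not register the custom GridSample3D plugin, and the underlying "CUDA driver version is insufficient" error still has to be fixed first:

```python
# Sketch of the non-deprecated TensorRT 8.x builder flow (not the repo's code).
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("checkpoints/liveportrait_onnx/warping_spade-fix.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
# Replaces the deprecated `config.max_workspace_size = 12 * (2 ** 30)`.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 12 * (2 ** 30))

# Replaces the deprecated `build_engine`; it returns None on failure instead
# of raising, so check explicitly rather than using it as a context manager.
serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise RuntimeError("engine build failed; see TensorRT errors above")
with open("warping_spade-fix.trt", "wb") as f:
    f.write(serialized)
```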
Hmm, it seems like a CUDA issue. You might need to build a new image with CUDA 11.8 or higher yourself. You can find the base image on Docker Hub under nvidia/cuda.
You are right, I hadn't downloaded the ONNX models. Now with these models, I get this:
```
(base) root@991c728edb21:~/FasterLivePortrait# python app.py --mode onnx --mp
load Human Model >>>
loading model: warping_spade {'name': 'WarpingSpadeModel', 'predict_type': 'ort', 'model_path': './checkpoints/liveportrait_onnx/warping_spade.onnx'}
OnnxRuntime use ['CUDAExecutionProvider', 'CoreMLExecutionProvider', 'CPUExecutionProvider']
/root/miniconda3/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CoreMLExecutionProvider' is not in available provider names.Available providers: 'CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
*************** EP Error ***************
EP Error /opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void]
/opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void]
CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=32761 ; hostname=991c728edb21 ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=245 ; expr=cudaSetDevice(info_.device_id);
when using ['CUDAExecutionProvider', 'CoreMLExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
Traceback (most recent call last):
  File "/root/FasterLivePortrait/app.py", line 32, in <module>
    gradio_pipeline = GradioLivePortraitPipeline(infer_cfg)
  File "/root/FasterLivePortrait/src/pipelines/gradio_live_portrait_pipeline.py", line 32, in __init__
    super(GradioLivePortraitPipeline, self).__init__(cfg, **kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 27, in __init__
    self.init(**kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 31, in init
    self.init_models(**kwargs)
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 51, in init_models
    self.model_dict[model_name] = getattr(models, self.cfg.models[model_name]["name"])(
  File "/root/FasterLivePortrait/src/models/warping_spade_model.py", line 20, in __init__
    super(WarpingSpadeModel, self).__init__(**kwargs)
  File "/root/FasterLivePortrait/src/models/base_model.py", line 14, in __init__
    self.device = torch.cuda.current_device()
  File "/root/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
    _lazy_init()
  File "/root/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
```
I have CUDA Toolkit 12.5, my CUDA_ARCHITECTURE is 61, and my GPU is a GTX 1080 Ti.
In the next version of the Docker image, could you include compatibility with 1080 Ti-class cards, i.e. CUDA architecture 61?
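`CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version` means the driver visible inside the container is older than the CUDA runtime the container ships. A quick diagnostic to run inside the container (a sketch using only PyTorch, which the image already has):

```python
# Sketch: report what the container actually sees.
import torch

print("CUDA runtime torch was built with:", torch.version.cuda)
print("driver usable:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # A GTX 1080 Ti should report (6, 1), i.e. CUDA architecture 61.
    print("compute capability:", torch.cuda.get_device_capability(0))
```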
It seems to be an issue with an invalid soft link. You can first refer to this link to resolve the "Found no NVIDIA driver on your system" error: https://github.com/warmshao/FasterLivePortrait/issues/8
You are right again. Now, when I select a demo image and video and press the 'Animate' button, I get this:
```
add source:/tmp/gradio/04a5523d1f8b3a6482d6f46614cffbf8f08194d8/s5.jpeg to infer cfg
add driving:/tmp/gradio/676c6c163d0c6f5d3c4776b01172d378a2d611ef/d6.mp4 to infer cfg
update infer cfg flag_relative_motion from True to True
update infer cfg flag_do_crop from True to True
update infer cfg flag_pasteback from True to True
update infer cfg driving_multiplier from 1.0 to 1
update infer cfg flag_stitching from True to True
update infer cfg flag_crop_driving_video from False to False
update infer cfg flag_video_editing_head_rotation from False to False
update crop cfg src_scale from 2.3 to 2.3
update crop cfg src_vx_ratio from 0.0 to 0
update crop cfg src_vy_ratio from -0.125 to -0.125
update crop cfg dri_scale from 2.2 to 2.2
update crop cfg dri_vx_ratio from 0.0 to 0
update crop cfg dri_vy_ratio from -0.1 to -0.1
update infer cfg driving_smooth_observation_variance from 1e-07 to 1e-07
process source:/tmp/gradio/04a5523d1f8b3a6482d6f46614cffbf8f08194d8/s5.jpeg >>>>>>>>
  0%|          | 0/1 [00:00<?, ?it/s]
2024-08-10 23:29:59.590152112 [E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Relu node. Name:'Relu_2' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device
  0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/root/FasterLivePortrait/src/pipelines/faster_live_portrait_pipeline.py", line 147, in prepare_source
    src_faces = self.model_dict["face_analysis"].predict(img_bgr)
  File "/root/FasterLivePortrait/src/models/face_analysis_model.py", line 309, in predict
    bboxes, kpss = self.detect_face(*data)
  File "/root/FasterLivePortrait/src/models/face_analysis_model.py", line 206, in detect_face
    o448, o471, o494, o451, o474, o497, o454, o477, o500 = self.face_det.predict(det_img[None])
  File "/root/FasterLivePortrait/src/models/predictor.py", line 225, in predict
    results = self.onnx_model.run(None, input_feeds)
  File "/root/miniconda3/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Relu node. Name:'Relu_2' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "/root/miniconda3/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/lib/python3.10/site-packages/gradio/blocks.py", line 1897, in process_api
    result = await self.call_function(
  File "/root/miniconda3/lib/python3.10/site-packages/gradio/blocks.py", line 1483, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/root/miniconda3/lib/python3.10/site-packages/gradio/utils.py", line 816, in wrapper
    response = f(*args, **kwargs)
  File "/root/FasterLivePortrait/app.py", line 36, in gpu_wrapped_execute_video
    return gradio_pipeline.execute_video(*args, **kwargs)
  File "/root/FasterLivePortrait/src/pipelines/gradio_live_portrait_pipeline.py", line 110, in execute_video
    video_path, video_path_concat, total_time = self.run_local(input_driving_video_path, input_source_path,
  File "/root/FasterLivePortrait/src/pipelines/gradio_live_portrait_pipeline.py", line 125, in run_local
    raise gr.Error(f"Error in processing source:{source_path} 💥!", duration=5)
gradio.exceptions.Error: 'Error in processing source:/tmp/gradio/04a5523d1f8b3a6482d6f46614cffbf8f08194d8/s5.jpeg 💥!'
```
I did not compile onnxruntime-gpu with architecture 61. You can compile it yourself according to the README and add 61 to the architecture list:
```
./build.sh --parallel \
  --build_shared_lib --use_cuda \
  --cuda_version 11.8 \
  --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda/ \
  --config Release --build_wheel --skip_tests \
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES="60;61;70;75;80;86" \
  --cmake_extra_defines CMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
  --disable_contrib_ops \
  --allow_running_as_root
```
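After rebuilding and installing the wheel, a quick sanity check confirms the CUDA provider is actually picked up before rerunning the app (a sketch; the model path assumes the default checkpoints layout):

```python
# Sketch: verify the rebuilt onnxruntime-gpu wheel exposes and uses CUDA.
import onnxruntime as ort

print("onnxruntime:", ort.__version__)
print("available:", ort.get_available_providers())
sess = ort.InferenceSession(
    "./checkpoints/liveportrait_onnx/warping_spade.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If the sm_61 kernels are in the build, CUDAExecutionProvider stays active
# instead of silently falling back to CPU.
print("active:", sess.get_providers())
```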