Tinaisok opened this issue 3 months ago
If I open a command prompt, change directly into the LivePortrait directory, and run python inference.py, I get:
C:\Users\19535\LivePortrait>python inference.py
Traceback (most recent call last):
  File "C:\Users\19535\LivePortrait\inference.py", line 13, in <module>
    from src.live_portrait_pipeline import LivePortraitPipeline
  File "C:\Users\19535\LivePortrait\src\live_portrait_pipeline.py", line 19, in <module>
    from .utils.cropper import Cropper
  File "C:\Users\19535\LivePortrait\src\utils\cropper.py", line 21, in <module>
    from .face_analysis_diy import FaceAnalysisDIY
  File "C:\Users\19535\LivePortrait\src\utils\face_analysis_diy.py", line 9, in <module>
    from .dependencies.insightface.app import FaceAnalysis
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\__init__.py", line 16, in <module>
    from . import model_zoo
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\__init__.py", line 1, in <module>
    from .model_zoo import get_model
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 11, in <module>
    from .arcface_onnx import *
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
    import onnx
  File "C:\Users\19535\anaconda3\Lib\site-packages\onnx\__init__.py", line 77, in <module>
    from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
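A minimal sketch of a sanity check for this: importing onnx on its own reproduces the failure outside of LivePortrait and shows which installation Python is actually picking up (the traceback above resolves onnx from the base anaconda3 site-packages rather than a dedicated conda environment):

```python
# Standalone check (a diagnostic sketch, not part of the LivePortrait repo):
# does onnx import at all, and from which site-packages?
try:
    import onnx
    print("onnx version:", onnx.__version__)
    print("loaded from:", onnx.__file__)
except ImportError as exc:
    # On a broken install this reproduces the DLL-initialization failure above.
    print("onnx failed to import:", exc)
```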
After python inference.py failed on the command line the first time, I found a workaround online: conda install -c conda-forge onnx (source). Running python inference.py again then produced more output (screenshot not included), so I next installed pykalman with pip install pykalman.
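A minimal sketch of a follow-up check that both packages now import cleanly:

```python
# Post-install check (sketch): confirm the conda-forge onnx build and
# pykalman are importable in the active environment.
import onnx
import pykalman  # noqa: F401  (imported only to verify it is installed)

print("onnx", onnx.__version__, "imports OK")
print("pykalman imports OK")
```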
After installing that, running python inference.py again showed:
ImportError: The FFMPEG plugin is not installed. Use pip install imageio[ffmpeg] to install it.
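Once that extra is installed, a minimal sketch of a check that the ffmpeg backend is really available, assuming the imageio-ffmpeg helper package that the imageio[ffmpeg] extra pulls in:

```python
# Sketch: imageio[ffmpeg] installs imageio-ffmpeg, which locates or bundles
# an ffmpeg executable for imageio to use.
import imageio_ffmpeg

print("ffmpeg binary:", imageio_ffmpeg.get_ffmpeg_exe())
```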
After pip install imageio[ffmpeg] completed successfully, I ran python inference.py once more, and this time it actually worked (the log is not reproduced here). As the Kimi analysis summarized: "Finally, the log shows that your script ran successfully and generated the animation video files animations/s0--d0.mp4 and animations/s0--d0_concat.mp4. This means that even with the CUDA-related warnings, the script was still able to complete its task. If your task is not performance-critical, or you do not plan to run deep-learning models on the GPU, this warning may not affect your use. But if you need GPU acceleration, you should check and resolve the problem following the steps above." In the end the videos were generated successfully and can be found in the animations subfolder under LivePortrait.
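For anyone who wants real GPU acceleration rather than the CPU fallback, a minimal sketch of a check of what onnxruntime and PyTorch can see:

```python
# Sketch: which execution providers onnxruntime offers, and whether PyTorch
# can reach CUDA. If CUDAExecutionProvider is absent, inference silently
# falls back to the CPU.
import onnxruntime as ort
import torch

print("onnxruntime:", ort.__version__)
print("available providers:", ort.get_available_providers())
print("torch sees CUDA:", torch.cuda.is_available())
```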
@Tinaisok Thank you for providing a solution. We have added a label to this type of issue so other users can reference it.
But onnx should already have been installed from requirements.txt, shouldn't it?
conda install conda-forge::vs2015_runtime
Try adding this runtime; it worked for me.
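The conda-forge vs2015_runtime package ships the Microsoft Visual C++ runtime DLLs that onnx's compiled extension links against, which is presumably why it resolves the DLL-initialization error. A minimal sketch of a check that those runtime DLLs are loadable (the DLL names below are the usual MSVC 2015+ runtime names, an assumption rather than something the error message states):

```python
# Windows-only sketch: can the MSVC runtime DLLs that native extensions
# such as onnx's onnx_cpp2py_export depend on actually be loaded?
import ctypes

for name in ("vcruntime140.dll", "msvcp140.dll"):
    try:
        ctypes.WinDLL(name)
        print(name, "loaded OK")
    except OSError as exc:
        print(name, "failed to load:", exc)
```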
I followed the README to deploy the project. When running step 3, python inference.py, the models load for the first few seconds and then the problem below appears. I downloaded Dependency Walker and analyzed the DLL C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll; the result was:

Error: At least one required implicit or forwarded dependency was not found.
Error: A circular dependency was detected.
Warning: At least one delay-load dependency module was not found.
Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.

Could the developers please offer some advice on how to solve this? Output from step 3:

(LivePortrait) C:\Users\19535\LivePortrait>python inference.py
[05:13:17] Load appearance_feature_extractor from C:\Users\19535\LivePortrait\pretrained_weights\liveportrait\base_models\appearance_feature_extractor.pth done. (live_portrait_wrapper.py:46)
           Load motion_extractor from C:\Users\19535\LivePortrait\pretrained_weights\liveportrait\base_models\motion_extractor.pth done. (live_portrait_wrapper.py:49)
           Load warping_module from C:\Users\19535\LivePortrait\pretrained_weights\liveportrait\base_models\warping_module.pth done. (live_portrait_wrapper.py:52)
[05:13:18] Load spade_generator from C:\Users\19535\LivePortrait\pretrained_weights\liveportrait\base_models\spade_generator.pth done. (live_portrait_wrapper.py:55)
           Load stitching_retargeting_module from C:\Users\19535\LivePortrait\pretrained_weights\liveportrait\retargeting_models\stitching_retargeting_module.pth done. (live_portrait_wrapper.py:59)
2024-08-14 05:13:18.8173059 [E:onnxruntime:Default, provider_bridge_ort.cc:1744 onnxruntime::TryGetProviderInfo_CUDA] C:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
EP Error
EP Error C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported. when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2024-08-14 05:13:18.8551378 [E:onnxruntime:Default, provider_bridge_ort.cc:1744 onnxruntime::TryGetProviderInfo_CUDA] C:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
Traceback (most recent call last):
  File "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "C:\Users\19535\LivePortrait\inference.py", line 65, in <module>
    main()
  File "C:\Users\19535\LivePortrait\inference.py", line 55, in main
    live_portrait_pipeline = LivePortraitPipeline(
  File "C:\Users\19535\LivePortrait\src\live_portrait_pipeline.py", line 39, in __init__
    self.cropper: Cropper = Cropper(crop_cfg=crop_cfg)
  File "C:\Users\19535\LivePortrait\src\utils\cropper.py", line 63, in __init__
    self.face_analysis_wrapper = FaceAnalysisDIY(
  File "C:\Users\19535\LivePortrait\src\utils\face_analysis_diy.py", line 37, in __init__
    super().__init__(name=name, root=root, allowed_modules=allowed_modules, **kwargs)
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\app\face_analysis.py", line 33, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "C:\Users\19535\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
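LoadLibrary error 126 usually means onnxruntime_providers_cuda.dll itself was found but one of its own dependencies (the CUDA/cuDNN DLLs it links against) was not, which matches the Dependency Walker result above. A minimal diagnostic sketch, using the DLL path from the log above, that may help narrow this down:

```python
# Windows diagnostic sketch: report CUDA_PATH and try to load the
# onnxruntime CUDA execution-provider DLL directly from this process.
import ctypes
import os

print("CUDA_PATH:", os.environ.get("CUDA_PATH"))

dll = (r"C:\Users\19535\anaconda3\envs\LivePortrait\lib\site-packages"
       r"\onnxruntime\capi\onnxruntime_providers_cuda.dll")
try:
    ctypes.WinDLL(dll)
    print("provider DLL loaded OK")
except OSError as exc:
    # Error 126 here points at a missing CUDA/cuDNN dependency on PATH,
    # not at the provider DLL itself being absent.
    print("provider DLL failed to load:", exc)
```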