lazydog28 / mc_auto_boss

Automatically farm BOSS echoes in Wuthering Waves while the game runs in the background

Sharing my experience running this program with CUDA 12.0 #125

Open BlueSkyXN opened 3 months ago

BlueSkyXN commented 3 months ago

Running this program with CUDA 12.0

Environment

No Conda required; a plain native Python environment is used.

Test setup: NVIDIA RTX 4080, Windows 11 Pro 23H2, Python 3.10.11

Tested by: @BlueSkyXN

Steps

  1. First, download the repository with git clone.

  2. Edit requirements.txt and remove onnxruntime-gpu and paddlepaddle-gpu (they will be installed separately below).

  3. In Windows' installed-apps list, uninstall everything with CUDA in its name.

  4. Download the CUDA installer from the NVIDIA website: https://developer.nvidia.com/cuda-12-0-0-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local

  5. Install CUDA 12 with all defaults, except under the component selection check only CUDA; there is no need to overwrite the driver or anything else (choose Custom, not Express).

  6. Download the cuDNN 8.9.7.29 for CUDA 12 package from https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/windows-x86_64/cudnn-windows-x86_64-8.9.7.29_cuda12-archive.zip , unzip it, and add its bin directory to the Path environment variable; no reboot is needed (see the PATH sanity-check sketch after this list).

  7. Install paddlepaddle-gpu with: python -m pip install paddlepaddle-gpu==2.6.1.post120 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html

  8. Verify the paddle installation: start python, then enter paddle.utils.run_check(); if no error is reported, it succeeded. (Note the correction in the comments below: you need to import paddle first.)

  9. Install onnxruntime-gpu with: pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/ Note that this is Microsoft's overseas package feed, so download speeds may be poor and a proxy may help. See also: https://github.com/lazydog28/mc_auto_boss/issues/125#issuecomment-2285165430

  10. Then pip install -r requirements.txt installs the remaining dependencies. A global mirror is recommended, or specify one at this point with -i, e.g. -i https://pypi.tuna.tsinghua.edu.cn/simple. The Tsinghua or Tencent mirrors are recommended; I usually use Tencent's. Avoid the Alibaba mirror; it was very slow when I tried it tonight.

  11. Then edit the config file config.yaml, paying attention to the paths. For example, I play on both WeGame and the official server, and my official-server client config is locked to 120 FPS, so I run this on the WeGame client.

  12. After launching the game, do not minimize the Wuthering Waves window or modify the config file while the script runs. For dual-monitor users, the mouse will be locked to the primary screen. Check the in-game settings; in particular, camera reset must be enabled.

  13. Then run python background/main.py; note that your working directory must be the repository root.

  14. When upgrading the program, copy config.yaml somewhere safe and leave everything else untouched.
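As a quick sanity check for steps 5 and 6, the sketch below (my addition, not part of the original steps) looks for nvcc and the cuDNN runtime DLL on Path; the DLL name cudnn64_8.dll is my assumption based on the cuDNN 8.x archive's bin directory:

import os
import shutil

# nvcc resolves only if the CUDA 12.x toolkit's bin directory is on Path.
print("nvcc:", shutil.which("nvcc") or "NOT FOUND")

# Look for the cuDNN runtime DLL (assumed name: cudnn64_8.dll) in every Path entry.
dll = "cudnn64_8.dll"
hits = [d for d in os.environ.get("PATH", "").split(os.pathsep)
        if d and os.path.isfile(os.path.join(d, dll))]
print(dll + ":", hits[0] if hits else "NOT FOUND - add the cuDNN bin directory to Path")

If both lines print a location, the toolkit and cuDNN should be visible to the packages installed in the later steps.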

BlueSkyXN commented 3 months ago

I don't know how to submit this to the GitHub wiki, so this will have to do.

chryssss commented 3 months ago

8. Verify the paddle installation: start python, then enter paddle.utils.run_check(); if no error is reported, it succeeded.

Don't forget to import paddle after starting Python here.

HugoLau91 commented 2 months ago

The comment above is correct. From the official tutorial: run python to enter the Python interpreter, enter import paddle, then enter paddle.utils.run_check(); if PaddlePaddle is installed successfully! appears, the installation succeeded.
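Put together, the whole verification is just the following (a minimal sketch of the procedure described above; the expected success message is the one quoted from the official tutorial):

import paddle

# Runs PaddlePaddle's built-in installation check, including a GPU test on CUDA builds.
paddle.utils.run_check()
# A successful run ends with: PaddlePaddle is installed successfully!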

BlueSkyXN commented 1 month ago

The repository version this guide applies to: https://github.com/lazydog28/mc_auto_boss/commit/7e44d41b1f20197843553100cf2432e8c72b83d8

CUDA is 12.0:

(base) PS G:\Github\mc_auto_boss.wiki> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Mon_Oct_24_19:40:05_Pacific_Daylight_Time_2022
Cuda compilation tools, release 12.0, V12.0.76
Build cuda_12.0.r12.0/compiler.31968024_0

Also, the onnxruntime-gpu install command has changed from the original post: 1.18.1 fails in my testing, while 1.18.0 works. Install command:

pip install onnxruntime-gpu==1.18.0 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
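As an extra check (my addition, not from the original guide), you can ask onnxruntime which execution providers the installed build exposes; if CUDAExecutionProvider is missing, the GPU package or its CUDA/cuDNN dependencies are not set up correctly:

import onnxruntime as rt

print(rt.__version__)                # expected: 1.18.0
print(rt.get_device())               # 'GPU' for a CUDA-enabled build
print(rt.get_available_providers())  # should include 'CUDAExecutionProvider'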

With 1.18.1, it fails with the following error:

(base) PS G:\Github\mc_auto_boss2> python background/main.py
2024-08-13 07:42:32.7557461 [E:onnxruntime:Default, provider_bridge_ort.cc:1745 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-08-13 07:42:32.7786336 [E:onnxruntime:Default, provider_bridge_ort.cc:1745 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:\Github\mc_auto_boss2\background\main.py", line 12, in <module>
    from task import boss_task, synthesis_task, echo_bag_lock_task
  File "G:\Github\mc_auto_boss2\background\task\__init__.py", line 9, in <module>
    from .pages.general import pages as general_pages
  File "G:\Github\mc_auto_boss2\background\task\pages\__init__.py", line 13, in <module>
    from utils import *
  File "G:\Github\mc_auto_boss2\background\utils.py", line 27, in <module>
    from yolo import search_echoes
  File "G:\Github\mc_auto_boss2\background\yolo.py", line 21, in <module>
    model = rt.InferenceSession(model_path, providers=provider)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\ProgramData\anaconda3\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

BlueSkyXN commented 1 month ago

(base) PS C:\Users\SKY> cd G:\Github\mc_auto_boss2
(base) PS G:\Github\mc_auto_boss2> python background/main.py
未找到游戏窗口
按任意键退出...
(base) PS G:\Github\mc_auto_boss2> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0
(base) PS G:\Github\mc_auto_boss2>

Upgraded to CUDA 12.6; it does not affect running the program (the 未找到游戏窗口 / game-window-not-found message above just means the game client was not open at the time).
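If you want to double-check this after a toolkit upgrade, the installed wheel keeps reporting the CUDA/cuDNN versions it was built against, independent of the local toolkit. A small sketch, assuming the paddle.version helpers available in current PaddlePaddle releases:

import paddle

print(paddle.version.cuda())   # CUDA version the wheel was built against (should still report 12.0)
print(paddle.version.cudnn())  # cuDNN version the wheel was built against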