Linux is supported. After installing the Python dependencies and compiling the executable, it can be used directly; I haven't written the documentation yet.
I will update the installation instructions soon.
Thanks, looking forward to the good news.
Really looking forward to the installation instructions! Thank you.
@Bills135 @Karen0103 After downloading the Linux build from the link above, follow the instructions here to install the dependencies: https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt, and it can then be used directly. If you need to run it in an environment without a GUI, install the dependencies and simply execute python3 main.py, then call http://127.0.0.1:8000/switch-model, passing in the model and configuration to load.
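For example, a minimal Python sketch of that call (the model path and strategy here are placeholders, not from the repo; substitute your own):

import requests

# Ask the running backend (default http://127.0.0.1:8000) to load a model.
r = requests.post(
    "http://127.0.0.1:8000/switch-model",
    json={
        "model": "./models/your-model.pth",  # placeholder path; point this at your model file
        "strategy": "cuda fp16",             # e.g. "cpu fp32" on machines without a GPU
    },
)
print(r.status_code, r.text)  # 200 once the model has loaded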
I will follow up with an example of deploying it on a server for use with ChatGPT applications.
Can it be installed from source? There is no configure file.
There is no configure script at the moment; you need to install wails yourself: https://wails.io/docs/gettingstarted/installation
Hi, it looks like the Linux usage tutorial hasn't been finished yet. I'm trying to deploy it; could you give me some pointers? https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt
@baofengqqwff For the GUI, install the dependencies following those instructions and it can be used directly.
Is there a way to start the WebUI via Python? Launching the executable directly fails with: Gtk-WARNING **: 16:57:37.766: cannot open display:
@baofengqqwff Instructions will follow.
@Bills135 @Karen0103 @baofengqqwff Server deployment example scripts; note that the model in the script is the smallest, 0.1B, and runs on CPU only: https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples
I wrote an AUR pkg config: https://gist.github.com/BoyanXu/9961a27587984073458d15cfa47a0ab0.
The problem I met is that the strict requirement for the torch version to be torch-1.13.1+cu117 triggers issue #17 on my machine with torch-2.0.1-2.
The program seems to rely on the default python3 interpreter at /usr/bin/python, whose package is managed at the system level by pacman. Keeping an outdated PyTorch at the system level with a package manager like pacman is extremely tricky, as it will break the dependencies of other packages.
A workaround for now may be to use a virtual environment and remove the PyTorch dependency from the PKGBUILD accordingly. But I wonder what prevented the project from supporting the latest PyTorch? Will PyTorch 2.0.x be supported in the future?
@BoyanXu Actually, this program does not have a strict requirement for the torch version. The requirement mentioned in #17 is limited to Windows, because the Windows version has a built-in custom CUDA kernel accelerator. The kernel is compiled under torch-1.13.1+cu117, and you can customize the Python interpreter used on the Settings page and use other torch versions. Linux users must compile the CUDA kernel themselves or not use the acceleration. Visit https://github.com/BlinkDL/ChatRWKV to learn how to compile the kernel.
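For reference, a rough sketch of how the kernel build is typically triggered in ChatRWKV (assuming the rwkv pip package plus nvcc, gcc, and ninja are installed; the model path is a placeholder):

import os
os.environ["RWKV_CUDA_ON"] = "1"  # must be set before the rwkv package is imported

from rwkv.model import RWKV  # with the flag set, loading a model JIT-compiles the CUDA kernel

model = RWKV(model="./models/your-model.pth", strategy="cuda fp16")  # placeholder path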
@BoyanXu Just tried your gist; on Arch WSL it works fine. Perhaps you need to disable Use Custom CUDA kernel to Accelerate on the Configs page.
➜ rwkv-runner makepkg -si
==> Making package: RWKV-Runner 1.2.0-1 (Fri 14 Jul 2023 09:50:56 PM CST)
==> Checking runtime dependencies...
==> Installing missing dependencies...
[sudo] password for greenhandzdl:
error: target not found: python-sse-starlette
error: target not found: python-gputil
==> ERROR: 'pacman' failed to install missing dependencies.
==> Missing dependencies:
-> python-pytorch
-> python-sse-starlette
-> python-gputil
==> Checking buildtime dependencies...
==> ERROR: Could not resolve all dependencies.
@josStorer I'm currently trying to deploy only the Runner backend on a "LanRui Xingzhou" compute server. The base environment I created is Linux / PyTorch / official-1.12.1-cuda11.6-cudnn8-devel. Do I still need to compile the CUDA kernel myself?
If so, I'm not sure how to perform this compilation step (even though I've read the docs carefully and can compile the CUDA kernel independently in BlinkDL's ChatRWKV project, I still don't know how to apply it to the Runner project).
@eyaeya Compilation is optional. In the Runner backend inference service, when you call /switch-model to load a model, pass customCuda: true to enable the custom CUDA kernel. It will then be compiled automatically using your installed environment; you just need to make sure gcc, ninja, the Python dependencies, and the CUDA libraries are installed correctly.
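For instance, a sketch of such a request (the model path is a placeholder):

import requests

# customCuda: true makes the backend build and use the custom CUDA kernel;
# this assumes gcc, ninja, the Python dependencies, and the CUDA libraries are present.
r = requests.post(
    "http://127.0.0.1:8000/switch-model",
    json={
        "model": "./models/your-model.pth",  # placeholder path
        "strategy": "cuda fp16",
        "customCuda": True,
    },
)
print(r.text)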
@josStorer A question: after deploying only the Runner backend on the "LanRui Xingzhou" compute server (before switching to a model), I got an error when trying to use URL/docs to verify that the API was running. The steps I ran:
sudo apt install python3-dev
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
cd RWKV-Runner
python3 ./backend-python/main.py --webui > log.txt &
http://URL:8000/docs
Opening it gave the message below. And after deploying only the backend on the Linux server, switching the model also reported an error.
nvcc --version
user@lsp-ws:~/netdisk/data/RWKV-Runner$ python3 ./backend-python/main.py --webui > log.txt &
[1] 7957
user@lsp-ws:~/netdisk/data/RWKV-Runner$ INFO: Started server process [7957]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
user@lsp-ws:~/netdisk/data/ninja$ curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp32","customCuda":"true","deploy":"true"}'
{"detail":"failed to load: CUDA out of memory. Tried to allocate 224.00 MiB (GPU 0; 23.70 GiB total capacity; 21.72 GiB already allocated; 202.56 MiB free; 22.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"}
The server-side deployment method I followed is as follows:
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
cd RWKV-Runner
python3 ./backend-python/main.py --webui > log.txt &
sudo apt install re2c
git clone http://github.com/ninja-build/ninja
./configure.py --bootstrap
sudo cp ninja /usr/bin/
apt-get install ninja-build
curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp32","customCuda":"true","deploy":"true"}'
@eyaeya
- > log.txt &: don't do this, that's just an example script. You should use a tool like screen or tmux to run the program in the background.
- Don't use cuda fp32, it makes no sense; use cuda fp16. The error says CUDA out of memory, meaning you ran out of VRAM.
- Even with Python installed you still need python3-dev; it is used to compile cyac, which is required to enable the state cache.
- apt-get install gcc ninja-build is all you need.
- To compile the CUDA kernel you also have to install this: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_local, making sure to select your system version correctly.
Thanks @josStorer for the late-night reply. I'll try again tomorrow as suggested and report back. Merry Christmas 🎄
Below is my run log.
If you need to register on that platform, you can use my invitation code to get a coupon. Invitation code: 1006359338
Enter the VSCode online debugging interface.
sudo apt install python3-dev
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
cd RWKV-Runner/frontend
npm ci
npm run build
cd ..
sudo apt-get install gcc ninja-build
Note: on the LanRui Xingzhou compute platform, you need to point the port to 27777 and set the host to 0.0.0.0 to expose it for external access.
python3 ./backend-python/main.py --port 27777 --host 0.0.0.0 --webui
Note: change the model name in the command below to the one you are using. Open a new terminal and run:
curl http://127.0.0.1:27777/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp16","customCuda":"true","deploy":"true"}'
On this platform and environment the CUDA toolkit is already included, so the following steps are unnecessary.
lsb_release -a
Choose the version matching your own environment and adjust the CUDA toolkit installation commands below accordingly:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.3.1/local_installers/cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3
sudo apt-get install -y nvidia-kernel-open-545
sudo apt-get install -y cuda-drivers-545
@josStorer But I still haven't resolved the error when opening http://URL:8000/docs, although it doesn't seem to affect API calls.
@josStorer A question: on Linux with cuda fp16, do I still need to set os.environ["RWKV_CUDA_ON"]='1' to make things faster by compiling the CUDA kernel? If so, how should I set it?
Could you write out a complete guide? I find this Linux installation process really confusing.
%cd /content/RWKV-Runner/frontend
!npm ci
!npm run build
!npm install -g typescript
!npm run build
%cd ..
This part fails: tsc not found.
Traceback (most recent call last):
File "/content/RWKV-Runner/backend-python/main.py", line 114, in
@eyaeya You don't need os.environ["RWKV_CUDA_ON"]='1'. When you call /switch-model, passing customCuda: true enables it automatically.
Judging from the documentation, only mac / windows modes are available, is that right?