harrisonvanderbyl / rwkv-cpp-accelerated

A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies
MIT License

RWKV CUDA

This is a super simple C++/CUDA implementation of RWKV with no PyTorch/LibTorch dependencies.

Included is a simple example of how to use it from both C++ and Python.

This project is no longer maintained; try RWKV.CUH, RWKV.HPP, or RWKV.CPP instead.

It also supports RWKV v4 models only.

Features

Roadmap

Run example app

1) Go to the Actions tab.
2) Find a green checkmark for your platform.
3) Download the executable.
4) Download or convert a model (downloads here).
5) Place the model.bin file in the same place as the executable.
6) Run the executable.

Build Instructions

Build librwkv_cuda.a

In the top of the source directory

mkdir build
cd build
cmake ..
cmake --build . --config Release

Build the storygen example on Linux/Windows

Make sure you have already installed the CUDA Toolkit, HIP development tools, or Vulkan development tools.

# in example/storygen
build.sh   # Linux/NVIDIA
build.bat  # Windows/NVIDIA
amd.sh     # Linux/AMD
vulkan.sh  # Linux/Vulkan (all)

You can find the executable at build/storygen[.exe]; run it from the build directory. It expects a model.bin file in the converter folder. See the following note on downloading and converting the RWKV 4 models.

$ cd build 
$ ./storygen

Convert the model into the .bin format

You can download the model weights here: https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main

For conversion to a .bin model you can choose between two options:

GUI option

Make sure you have Python installed, along with the torch, tkinter, tqdm, and Ninja packages.

> cd converter
> python3 convert_model.py

CLI option

Make sure you have Python installed, along with the torch, tqdm, and Ninja packages.

> cd converter
> python3 convert_model.py your_downloaded_model.pth
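The README advertises 8-bit quantization, which the converter applies when producing model.bin. As a rough illustration of the idea (not the actual scheme or file layout used by convert_model.py, which are assumptions here), a per-row affine 8-bit quantization maps each float row onto the 0–255 range with a scale and offset:

```python
# Toy sketch of per-row affine 8-bit quantization, the general technique
# behind "8bit quantization" in the README. The real converter's exact
# scheme and model.bin layout are defined by convert_model.py, not here.

def quantize_row(row):
    """Map a row of floats to 0..255 ints plus a per-row scale and offset."""
    lo, hi = min(row), max(row)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant rows
    return [round((x - lo) / scale) for x in row], scale, lo

def dequantize_row(q, scale, lo):
    """Recover approximate floats from the quantized row."""
    return [v * scale + lo for v in q]

row = [-1.5, 0.0, 0.25, 3.0]
q, scale, lo = quantize_row(row)
back = dequantize_row(q, scale, lo)
# Rounding error is bounded by half a quantization step per element.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(row, back))
```

Storing one scale and offset per row keeps the reconstruction error bounded by half a quantization step while shrinking the weights to a quarter of their fp32 size.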

The C++ tokenizer came from this project: https://github.com/gf712/gpt2-cpp/
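That tokenizer implements GPT-2-style byte-pair encoding (BPE). The following toy Python sketch shows the greedy merge loop at the core of such a tokenizer; the merge table here is made up for illustration, whereas the real tokenizer loads GPT-2's learned merges and vocabulary.

```python
# Toy byte-pair-encoding sketch. MERGES is a hypothetical ranked merge
# table; a real GPT-2 tokenizer loads ~50k learned merges from disk.
MERGES = [("l", "o"), ("lo", "w"), ("e", "r")]

def bpe(word):
    """Split a word into symbols, then apply merges in priority order."""
    symbols = list(word)
    for a, b in MERGES:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # fuse the adjacent pair
            else:
                i += 1
    return symbols

print(bpe("lower"))  # -> ['low', 'er']
```

Each resulting symbol is then looked up in the vocabulary to produce the integer token IDs that the model consumes.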