RWKV / rwkv.cpp

INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
MIT License
1.13k stars · 82 forks

Repetitive, meaningless tokens output #123

Closed · hhjin closed this issue 11 months ago

hhjin commented 11 months ago

I successfully compiled the project and converted the model to a quantized format on a Mac M1. But when I use chat_with_bot.py, the output is meaningless words or long runs of repeated characters. I tried the same thing on a Windows machine with the same result. Is anything wrong? Is it related to the model? I downloaded the latest RWKV-4 World model, and different models all show this problem.


$ python rwkv/chat_with_bot.py rwkv-cpp-world-1.5B-q8_0.bin 
Loading 20B tokenizer
System info: AVX=0 AVX2=0 AVX512=0 FMA=0 NEON=1 ARM_FMA=1 F16C=0 FP16_VA=1 WASM_SIMD=0 BLAS=1 SSE3=0 VSX=0
Loading RWKV model
Processing 185 prompt tokens, may take a while
Processed in 4 s, 25 ms per token

Chat initialized! Your name is User. Write something and press Enter. Use \n to add line breaks to your message.
> User: hi
> Bot:\shipNacterf :8eered fro0/:|436(:8.F2_
> User: > hello
> Bot:\,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@ dealareshipshipshipshipshipshipshipshipshipshipshipshipshipshipshipshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconareshipconarerf
 Orf
 Orf
 Orf
 Orf
 Orf

python rwkv/generate_completions.py rwkv-cpp-3B-CN-q8_0.bin                  
Loading 20B tokenizer
System info: AVX=0 AVX2=0 AVX512=0 FMA=0 NEON=1 ARM_FMA=1 F16C=0 FP16_VA=1 WASM_SIMD=0 BLAS=1 SSE3=0 VSX=0
Loading RWKV model
94 tokens in prompt

--- Generation 0 ---

# rwkv.cpp

This is a port of [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) to [ggerganov/ggml](https://github.com/ggerganov/ggml).

Besides usual **FP32**, it supports **FP16** and **quantized INT4** inference on CPU. This project is **CPU only**.[I@::@:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::]

Took 4.955 sec, 49 ms per token

--- Generation 1 ---

# rwkv.cpp

This is a port of [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) to [ggerganov/ggml](https:github.com/ggerganov/ggml).

Besides usual **FP32**, it supports **FP16** and **quantized INT4** inference on CPU. This project is **CPU only**. [�@(.�8�:�!�/�@0@0�.�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�!�]

Took 4.915 sec, 49 ms per token
schamane commented 11 months ago

Looks like you're using the wrong tokenizer... or the wrong model. Try with Raven. Also use a bigger one, maybe 3B.

saharNooby commented 11 months ago

Hi!

You are using an RWKV World model, which uses the world tokenizer. By default, generate_completions.py / chat_with_bot.py use the 20B tokenizer, which gives garbage output when paired with an RWKV World model.

You need to explicitly specify the world tokenizer when running the script:

python rwkv/chat_with_bot.py rwkv-cpp-world-1.5B-q8_0.bin world
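
Why does the wrong tokenizer produce gibberish rather than merely lower-quality text? The model emits token IDs that are only meaningful under the vocabulary it was trained with; decoding them through a different vocabulary maps every ID to an unrelated string. A minimal sketch (toy vocabularies invented for illustration, not rwkv.cpp code):

```python
# Toy illustration: decoding token IDs with a vocabulary other than the
# one the model was trained with turns every token into garbage.
world_vocab = {0: "Hello", 1: ",", 2: " world", 3: "!"}  # hypothetical
other_vocab = {0: "ship", 1: "@", 2: "rf", 3: ":8"}      # hypothetical

token_ids = [0, 1, 2, 3]  # what the model "means" under world_vocab

print("".join(world_vocab[t] for t in token_ids))  # -> Hello, world!
print("".join(other_vocab[t] for t in token_ids))  # -> ship@rf:8
```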


> Or the wrong model. Try with Raven. Also use a bigger one, maybe 3B.

In my experience, even 1B5 models are fluent and (when used with the correct tokenizer) generate okay text.

hhjin commented 11 months ago

Thanks. I get the desired output after adding the tokenizer type. The 7B World q8_0 model runs at about 10 tokens/sec on my 16 GB M1 MacBook.


python rwkv/generate_completions.py  rwkv-cpp-readflow-7B-ctx32k-q8_0.bin  world
Loading world tokenizer
System info: AVX=0 AVX2=0 AVX512=0 FMA=0 NEON=1 ARM_FMA=1 F16C=0 FP16_VA=1 WASM_SIMD=0 BLAS=1 SSE3=0 VSX=0
Loading RWKV model
91 tokens in prompt

--- Generation 0 ---

#  rwkv.cpp

This is a port of [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) to [ggerganov/ggml](https://github.com/ggerganov/ggml).

Besides usual **FP32**, it supports **FP16** and **quantized INT4** inference on CPU. This project is **CPU only**.[

# Example

```cpp
#include "ggml.h"
#include "common.h"
#include "logger.h"

int main()
{
    // Load model
    model::Ptr model = model::load("E:/workspace/ggml/examples/yolo/yolo.onnx");
    // Convert input to float
    real input_data = 1.0;
    real input_data_f = model->input_to_float]
```

Took 9.785 sec, 97 ms per token
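
As a quick sanity check of the reported speed (an editorial back-of-the-envelope using only the numbers in the log above), 97 ms per token is indeed roughly the 10 tokens/sec hhjin estimated:

```python
# Convert the logged per-token latency into throughput and an
# approximate token count for this generation.
total_sec = 9.785    # "Took 9.785 sec" from the log
ms_per_token = 97    # "97 ms per token" from the log

tokens_per_sec = 1000 / ms_per_token
tokens_generated = total_sec * 1000 / ms_per_token
print(f"~{tokens_per_sec:.1f} tokens/sec, ~{tokens_generated:.0f} tokens")
# -> ~10.3 tokens/sec, ~101 tokens
```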