RWKV / rwkv.cpp

INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
MIT License

Crash on an `endbr64` instruction. #24

Open · RnMss opened this issue 1 year ago

RnMss commented 1 year ago

My build crashes with "Illegal instruction" when running inference with a model. I debugged it, and it seems to crash on an endbr64 instruction. I think my CPU doesn't support that instruction set. Is there a build option to turn the instruction set off?
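
For reference, on Linux you can check which SIMD extensions the CPU actually advertises (a quick diagnostic, not specific to rwkv.cpp):

# which of the relevant SIMD feature flags does this CPU report?
> grep -E -o 'avx512f|avx2|avx|fma|f16c' /proc/cpuinfo | sort -u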

Version: Master, commit e84c446d9533dabef2d8d60735d5924db63362ff

Command to reproduce: python rwkv/chat_with_bot.py ../models/xxxxxxx.bin

It crashed with "Illegal Instruction"

I debugged the program:

> gdb python 
(gdb) handle SIGILL stop
(gdb) run rwkv/chat_with_bot.py ../models/xxxx.bin
...
[New Thread 0x7fff6fa49640 (LWP 738136)]
Loading 20B tokenizer
System info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 
Loading RWKV model

Thread 1 "python" received signal SIGILL, Illegal instruction.
0x00007fffde693135 in ggml_init () from /*****/rwkv.cpp/librwkv.so
(gdb) disassemble
Dump of assembler code for function ggml_init:
   0x00007fffde692fd0 <+0>:  endbr64
   0x00007fffde692fd4 <+4>:  push   %r15
   0x00007fffde692fd6 <+6>:  mov    $0x1,%eax
   0x00007fffde692fdb <+11>: push   %r14
...
saharNooby commented 1 year ago

Hi! Please try to build and run llama.cpp and see if it works.

If it crashes too with a similar error, please report the problem to the llama.cpp repo. They would fix it quicker, since their repo is more popular, and then I can port the fix here.

If it does not crash, we would need to compare the code of llama.cpp and rwkv.cpp and guess what might be causing the issue.
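
A minimal sketch of that test, assuming the Makefile build llama.cpp used at the time and a model you already have converted (paths are placeholders):

> git clone https://github.com/ggerganov/llama.cpp
> cd llama.cpp
> make
> ./main -m /path/to/model.bin -p "Hello"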

RnMss commented 1 year ago

I tried llama.cpp, and it worked without a crash. Tested on the models opt-1.3b and Chinese-Alpaca-LoRA-13B; llama.cpp version: master-53dbba7.

saharNooby commented 1 year ago

I took a look at llama.cpp's version of ggml. Unfortunately, our repos have now diverged too much for any comparison to make sense. Sorry for asking you to test llama.cpp; I'll stop asking users to do that from now on.

As for the issue, I don't have any ideas on how to fix it.

RnMss commented 1 year ago

I tried adding the compile flag -fcf-protection=none, which is supposed to disable CET instructions such as endbr64, but it does not help.
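
One way to pass such a flag through a CMake build (standard CMake cache variables; a sketch, your invocation may differ):

# rebuild with CET instruction generation disabled
> cmake -B build -DCMAKE_C_FLAGS="-fcf-protection=none" -DCMAKE_CXX_FLAGS="-fcf-protection=none"
> cmake --build build --config Release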

It doesn't make sense. I skimmed the code but didn't see anything related to this. The disassembly looks real, not like random data being executed as code. I'm doomed.

saharNooby commented 1 year ago

@RnMss I've updated ggml to the latest version. Please try again; don't forget to update the git submodules (or, better, clone from scratch: git clone --recursive https://github.com/saharNooby/rwkv.cpp.git).
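
If you keep your existing checkout, updating should look roughly like this:

> git pull
> git submodule update --init --recursive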

RnMss commented 1 year ago

It still does not work on my CPU. I'll try it on Windows later.

Model Tested: https://huggingface.co/BlinkDL/rwkv-4-raven/blob/main/RWKV-4-Raven-14B-v8-Eng87%25-Chn10%25-Jpn1%25-Other2%25-20230412-ctx4096.pth

EricLeeaaaaa commented 1 year ago

Got the same problem in the Docker image nvcr.io/nvidia/pytorch:23.05-py3, with tokenizers 0.13.3.

izzatzr commented 11 months ago

@RnMss Try recompiling the repo with the AVX instruction flags disabled in CMakeLists.txt. This step worked for me.
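
In case it helps others, a sketch of doing this from the command line instead of editing CMakeLists.txt, assuming the options follow llama.cpp's naming pattern (RWKV_AVX, RWKV_AVX2, RWKV_AVX512, RWKV_FMA — check CMakeLists.txt for the actual names):

# option names are an assumption; verify them in CMakeLists.txt first
> cmake -B build -DRWKV_AVX=OFF -DRWKV_AVX2=OFF -DRWKV_AVX512=OFF -DRWKV_FMA=OFF
> cmake --build build --config Release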