RWKV / rwkv.cpp

INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
MIT License
1.37k stars 90 forks

conversion seems to only support float16/float32, not quantized formats. #77

Closed cmdicely closed 1 year ago

cmdicely commented 1 year ago

On Windows 11, installed per the instructions, conversion seems to support only float16/float32, not quantized formats.

~\src\rwkv.cpp> python rwkv\convert_pytorch_to_ggml.py RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth Q8_0_RWKV-4-Raven-14B-v12.bin Q8_0
usage: convert_pytorch_to_ggml.py [-h] src_path dest_path {float16,float32}
convert_pytorch_to_ggml.py: error: argument data_type: invalid choice: 'Q8_0' (choose from 'float16', 'float32')
saharNooby commented 1 year ago

This is intended. First, you convert the PyTorch pth file to a bin file that stores the model weights as-is (float16 or float32). Then, optionally, you quantize that bin file into one of the quantized formats.
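A sketch of the two-step workflow, assuming the quantize.py script included in the repository and using your file names for illustration (exact paths and output names are up to you):

~\src\rwkv.cpp> python rwkv\convert_pytorch_to_ggml.py RWKV-4-Raven-14B-v12-Eng98%-Other2%-20230523-ctx8192.pth RWKV-4-Raven-14B-v12.bin float16
~\src\rwkv.cpp> python rwkv\quantize.py RWKV-4-Raven-14B-v12.bin Q8_0_RWKV-4-Raven-14B-v12.bin Q8_0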

The conversion and quantization stages are split to allow quantization on lower-RAM devices. If quantization always required a PyTorch file, that file would need to be read completely into RAM, which may not be possible for larger models.