BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embedding.
Apache License 2.0

How can I use LoRA + Alpaca-style code for instruction fine-tuning of RWKV? #143

Closed: ylinlinz closed this issue 1 year ago

ylinlinz commented 1 year ago

If I load the model with RwkvForCausalLM.from_pretrained(), I get an out-of-memory error (V100, 32 GB VRAM). How can I solve this?
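(Not part of the original thread: a minimal sketch of one common way to reduce VRAM when loading an RWKV checkpoint via Hugging Face transformers, by keeping the weights in fp16 instead of fp32. The checkpoint id is an assumption; substitute the one you are actually using.)

```python
import torch
from transformers import AutoTokenizer, RwkvForCausalLM

model_id = "RWKV/rwkv-4-1b5-pile"  # hypothetical checkpoint id; replace with yours

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = RwkvForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # fp16 weights roughly halve VRAM vs. fp32
    low_cpu_mem_usage=True,      # avoid materializing a full fp32 copy in host RAM
).to("cuda")
```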

BlinkDL commented 1 year ago

Please use https://github.com/Blealtan/RWKV-LM-LoRA
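(Not the linked repo's training script: a minimal sketch of attaching LoRA adapters to an RWKV model with Hugging Face PEFT instead, under the assumption that you stay in the transformers ecosystem. The checkpoint id and the target_modules names are assumptions and may need adjusting to the RWKV module names in your transformers version.)

```python
from peft import LoraConfig, get_peft_model
from transformers import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile")  # hypothetical id

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["key", "value", "receptance"],  # assumed RWKV projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```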