BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.
Apache License 2.0

How to construct a dataset for multi-round dialogue? Any examples? #106

Closed hitxueliang closed 1 year ago

hitxueliang commented 1 year ago

Can anyone give some data examples for multi-round dialogue? MOSS and ChatGLM have clear examples, but I cannot find any for RWKV.

BlinkDL commented 1 year ago

see https://github.com/BlinkDL/ChatRWKV and search for "When you build a RWKV chatbot"
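For illustration, here is a minimal sketch of how multi-round dialogue pairs could be flattened into a single training text using the "Name: utterance" turn style seen in the ChatRWKV demos. The speaker names (`Bob`, `Alice`), the helper name `dialogue_to_text`, and the blank-line separator between turns are assumptions for this sketch; check the ChatRWKV code referenced above for the exact format the chatbot expects.

```python
# Hypothetical sketch: flatten multi-round dialogue into one training sample.
# Speaker names and the blank-line turn separator are assumptions, not the
# confirmed ChatRWKV format -- see the ChatRWKV repo for the real convention.

def dialogue_to_text(turns, user="Bob", bot="Alice"):
    """turns: list of (user_utterance, bot_utterance) pairs."""
    parts = []
    for question, answer in turns:
        parts.append(f"{user}: {question.strip()}\n\n")
        parts.append(f"{bot}: {answer.strip()}\n\n")
    return "".join(parts)

sample = dialogue_to_text([
    ("Hello! Who are you?", "I am an AI assistant."),
    ("What can you do?", "I can answer questions and chat with you."),
])
print(sample)
```

Each training sample is then just one long string, so many dialogues can be concatenated into a plain-text corpus for training.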

gg22mm commented 1 year ago

https://github.com/BlinkDL/ChatRWKV/issues/118