BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
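The "RNN mode" inference behind this description can be illustrated with the WKV time-mixing recurrence from RWKV-4. The sketch below is a simplified, unofficial illustration in numpy (no numerical-stability tricks, no token-shift or channel-mixing); `w`, `u`, `k`, `v` follow the naming in the RWKV-4 paper, and the function name `wkv_recurrent` is hypothetical.

```python
import numpy as np

def wkv_recurrent(w, u, k, v):
    """Sequential (RNN-mode) WKV, simplified from RWKV-4.

    w: per-channel decay rate (> 0), shape (C,)
    u: per-channel bonus for the current token, shape (C,)
    k, v: keys and values, shape (T, C)
    Returns the WKV output, shape (T, C).
    """
    T, C = k.shape
    out = np.zeros((T, C))
    num = np.zeros(C)  # running exp-weighted sum of values
    den = np.zeros(C)  # running exp-weighted sum of weights
    for t in range(T):
        # the current token receives an extra bonus u before entering the state
        out[t] = (num + np.exp(u + k[t]) * v[t]) / (den + np.exp(u + k[t]))
        # fold the current token into the state, decaying the past by e^{-w}
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out
```

Because the state is just two vectors (`num`, `den`) per channel, inference cost per token is O(1) in sequence length, which is why RWKV can run with constant memory at arbitrary context lengths; during training the same quantity can instead be computed in parallel over `t` as an exponentially decayed weighted average.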
Apache License 2.0

Multi-Modal in the future? #135

Closed yhyu13 closed 1 year ago

yhyu13 commented 1 year ago

What is the roadmap for RWKV in the near future? Will it remain a chat language model focused on improving its emergent behavior, alignment, and accuracy, or will it evolve toward multimodality?

BlinkDL commented 1 year ago

Multi-lang first: https://huggingface.co/BlinkDL/rwkv-4-world — yeah, will add multimodal too.