RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
Could you create a place, for example the Issues tab of a dedicated repo, where people can report prompts they have tried but failed to get the expected result from, even though other LLMs such as GPT-4 can handle them?
Sharing failed attempts can save other people's time by sparing them the same dead ends, and people could discuss how to adapt the prompts so RWKV gives a better result.
We could also share successful prompts, much like civitai.com's image gallery. Successful prompts would give newcomers working examples to learn from.
These records should be organized by which model and strategy was used.
For example, I tried the prompts from https://github.com/theseamusjames/gpt3-python-maze-solver on RWKV, but they failed.
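One lightweight way to make such reports browsable by model and strategy would be to ask reporters to fill in a fixed set of fields (e.g. via a GitHub issue template) and group the records accordingly. A minimal sketch in Python — the record shape and field names here are my own illustration, not an existing schema; the strategy strings mirror the format used by the rwkv pip package:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical report record; fields are illustrative, not an existing schema.
@dataclass
class PromptReport:
    model: str      # e.g. "RWKV-4-Raven-7B"
    strategy: str   # e.g. "cuda fp16", as passed to the rwkv pip package
    prompt: str     # the prompt that was tried
    succeeded: bool # did RWKV produce the expected result?

def group_reports(reports):
    """Group reports by (model, strategy) so readers can filter to their setup."""
    groups = defaultdict(list)
    for r in reports:
        groups[(r.model, r.strategy)].append(r)
    return dict(groups)

reports = [
    PromptReport("RWKV-4-Raven-7B", "cuda fp16", "Solve this maze: ...", False),
    PromptReport("RWKV-4-Raven-7B", "cuda fp16", "Summarize: ...", True),
    PromptReport("RWKV-4-Pile-14B", "cpu fp32", "Solve this maze: ...", False),
]
grouped = group_reports(reports)
```

In practice the same grouping could be done with GitHub issue labels (one label per model, one per strategy) rather than code.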