RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embedding.
Cannot train an added module using RWKV_GPT in RWKV-4/src/model_run.py #162
When I add a module to the RWKV_GPT network in RWKV-4/src/model_run.py, the newly added module does not participate in backpropagation (its parameters are not updated at all). Is this caused by WKV (a torch.autograd.Function)? How can I fix it?
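One plausible cause, sketched below under stated assumptions: a custom `torch.autograd.Function` only propagates gradients to the inputs for which its `backward` returns a gradient tensor, so if the WKV kernel's `backward` returns `None` for the slot your new module feeds into, those parameters never receive gradients. It is also worth checking whether the code path in model_run.py runs under `torch.no_grad()` or with `requires_grad` disabled, since that file is inference code. `WKVLike` here is a hypothetical, simplified stand-in for the real CUDA-backed WKV Function, not the repository's implementation:

```python
import torch

# WKVLike is a hypothetical, simplified stand-in for the custom WKV
# torch.autograd.Function (the real one wraps a CUDA kernel).
class WKVLike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, k):
        ctx.save_for_backward(w, k)
        return w * k

    @staticmethod
    def backward(ctx, grad_out):
        w, k = ctx.saved_tensors
        # Bug to illustrate: returning None for an input detaches it from
        # the autograd graph, so parameters upstream of `k` get no gradient.
        return grad_out * k, None

w = torch.randn(4, requires_grad=True)
new_module = torch.nn.Linear(4, 4)  # the "newly added module"
out = WKVLike.apply(w, new_module(torch.randn(4)))
out.sum().backward()

print(w.grad)                  # populated
print(new_module.weight.grad)  # None -> the optimizer will never update it
```

If this is what is happening, the fix is to make the custom Function's `backward` return a proper gradient for every differentiable input, or to connect the new module to the graph outside the custom Function.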