BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.
Apache License 2.0

[RWKV-v5] use register_buffer instead of frozen params #213

Open · kashif opened 6 months ago

kashif commented 6 months ago

Also fixed a bug caused by this in MishGLU.

BlinkDL commented 6 months ago

These parameters are trainable, so we use nn.Parameter.
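
For readers following along, here is a minimal sketch (illustrative only, not RWKV-LM code; the names Demo, time_mix, and pos_ids are hypothetical) of the distinction under discussion: nn.Parameter tensors are returned by .parameters() and updated by the optimizer, while register_buffer tensors move with the module and are saved in its state_dict but never receive gradients.

```python
import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self, dim: int = 4):
        super().__init__()
        # Trainable: returned by .parameters(), updated by the optimizer.
        self.time_mix = nn.Parameter(torch.zeros(dim))
        # Not trainable: moves with .to()/.cuda() and is saved in
        # state_dict, but receives no gradients.
        self.register_buffer("pos_ids", torch.arange(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.time_mix + self.pos_ids

m = Demo()
print([n for n, _ in m.named_parameters()])  # ['time_mix']
print([n for n, _ in m.named_buffers()])     # ['pos_ids']
```

So if the tensors in question are meant to be trained, nn.Parameter is the right container, and switching them to register_buffer would silently freeze them.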

kashif commented 6 months ago

Ah ok, then what does the with torch.no_grad(): do?
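
For context, a common PyTorch pattern (and, as I understand it, what the RWKV init code is doing) is to wrap the arithmetic that computes initial values in torch.no_grad() so the setup math is not tracked by autograd; the nn.Parameter built from the result is still trainable, because nn.Parameter defaults to requires_grad=True. A minimal sketch with hypothetical names:

```python
import torch
import torch.nn as nn

class TimeMixInit(nn.Module):  # hypothetical module, not the repo's code
    def __init__(self, dim: int = 8):
        super().__init__()
        with torch.no_grad():
            # Init-value arithmetic runs without building an autograd graph.
            ratio = torch.arange(dim).float() / dim
            self.time_mix = nn.Parameter(torch.pow(ratio, 0.5))

m = TimeMixInit()
print(m.time_mix.requires_grad)  # True: the parameter is still trainable
```

In other words, no_grad() here only affects the initialization computation, not whether the parameter is trained afterwards.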

kashif commented 6 months ago

@BlinkDL happy to close this PR, or should I remove the torch.no_grad()?