RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
How to understand the u vector in the original paper? #244
As stated in the original paper: "Token Shift allows the model to learn how much new versus old information should be allocated per time step to each channel of receptance, key, value, and gate vectors (r, k, v, and g respectively) independently and uniquely for each head."
How should 'Token Shift' be understood here?
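
For context, token shift amounts to a learned per-channel linear interpolation between the current token's embedding and the previous token's embedding, computed separately for each of the r, k, v, and g projections. Below is a minimal PyTorch sketch of the idea; the class and parameter names are illustrative, not the repository's exact code:

```python
import torch
import torch.nn as nn

class TokenShift(nn.Module):
    """Per-channel linear interpolation between the current token x_t and
    the previous token x_{t-1} (a minimal sketch, not the exact RWKV code)."""

    def __init__(self, dim: int):
        super().__init__()
        # Learned mixing coefficient per channel; RWKV learns a separate
        # coefficient vector for each of r, k, v, and g.
        self.mix = nn.Parameter(torch.full((1, 1, dim), 0.5))
        # Shift the sequence right by one step along the time axis,
        # zero-padding the first position.
        self.shift = nn.ZeroPad2d((0, 0, 1, -1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        x_prev = self.shift(x)  # x_prev[:, t] == x[:, t-1]; x_prev[:, 0] == 0
        # mix * new information + (1 - mix) * old information, per channel
        return self.mix * x + (1 - self.mix) * x_prev

# Usage: one shifted mixture per projection, e.g. for the key vector k.
x = torch.randn(2, 8, 64)    # (batch, seq_len, dim)
k_input = TokenShift(64)(x)  # fed into the key projection
```

Because `mix` has one entry per channel, each channel can independently choose how much it looks at the new token versus the previous one, which is what the quoted sentence means by "independently and uniquely for each head".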