yuunnn-w / RWKV_Pytorch

This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation is overly complex and lacks extensibility. Let's join the flexible PyTorch ecosystem and open-source it together!
GNU General Public License v3.0
103 stars 7 forks

Some issues with deployment on the Orange Pi #40

Open 00ffcc opened 1 month ago

00ffcc commented 1 month ago

I actually tried running inference on the Orange Pi AI Pro 16G, and ran into the following issues:

  1. The Orange Pi does not support bf16; only fp16 and fp32 are available.
  2. fp16 produces NaN; the workaround is to halve x every 6 layers and run attention in fp32.
00ffcc commented 1 month ago

You can take a look here: https://gitee.com/guizhiyu/rwkv_ascend

uniartisan commented 1 month ago

For RWKV, the time state is very sensitive, so fp16 can cause numerical overflow. Your approach is correct, but it will make the inference results differ from the expected output.

For the time-related calculations, the issue is the opposite of overflow: because of the exponential decay, precision is lost as the values approach 0. Therefore we need to use fp32 for the time-decay computations.
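As a minimal sketch of this underflow effect (independent of RWKV itself), the round-trip below pushes an exponentially decayed value through IEEE-754 half precision using only the Python standard library. The decay exponent `-18.0` is an illustrative value, not one taken from the model:

```python
import math
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision (struct format 'e')."""
    return struct.unpack('e', struct.pack('e', x))[0]

def to_fp32(x: float) -> float:
    """Round-trip a float through IEEE-754 single precision (struct format 'f')."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A small exponentially decayed value, as produced by a time-decay term
# after many steps (the exponent here is chosen for illustration only).
decay = math.exp(-18.0)

# In fp16 this value falls below the smallest subnormal (~6e-8) and is
# flushed to exactly zero; in fp32 it survives with full significance.
print(to_fp16(decay))  # 0.0
print(to_fp32(decay) > 0.0)  # True
```

Once the decayed state is flushed to zero, every later step that multiplies by it stays zero, which is why the decay path specifically has to stay in fp32.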

The fp16 overflow, on the other hand, is addressed by halving x every 6 layers.
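A toy sketch of why the periodic halving helps: each residual block tends to grow the activation magnitude, and fp16 overflows past 65504. The per-layer growth factor and layer count below are made-up illustration numbers, not measurements from RWKV:

```python
NUM_LAYERS = 24
RESCALE_EVERY = 6       # halve the residual stream every 6 layers
FP16_MAX = 65504.0      # largest finite IEEE-754 half-precision value
GROWTH = 1.6            # hypothetical per-layer magnitude growth

def run_layers(x: float, rescale: bool) -> float:
    """Toy residual stack standing in for the real RWKV blocks.

    Each 'layer' multiplies the activation by GROWTH; with rescaling
    enabled, x is halved every RESCALE_EVERY layers to keep its
    magnitude inside the fp16 representable range."""
    for i in range(NUM_LAYERS):
        x = x * GROWTH
        if rescale and (i + 1) % RESCALE_EVERY == 0:
            x = x / 2.0
    return x

# Without rescaling, 1.6**24 ≈ 7.9e4 exceeds FP16_MAX and would become
# inf in half precision; with rescaling the result stays well below it.
print(run_layers(1.0, rescale=False) > FP16_MAX)  # True
print(run_layers(1.0, rescale=True) < FP16_MAX)   # True
```

The trade-off mentioned above is visible here too: the rescaled output is not just "safe", it is numerically different from the unrescaled one, which is why the results diverge from the reference implementation.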

However, this may produce results that differ substantially from the original model, so we have no plans to support it for now.