fastai / fastai2

Temporary home for fastai v2 while it's being developed
https://dev.fast.ai
Apache License 2.0

Don't turn layernorm into fp16 #533

Closed · richarddwang closed this issue 4 years ago

richarddwang commented 4 years ago

Currently we don't turn batch norm into fp16 when using mixed precision. https://github.com/fastai/fastai2/blob/8d798c881c1eda564bdf92079bdfe43b43525767/fastai2/fp16_utils.py#L61-L71
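
For reference, the pattern in the linked code looks roughly like the sketch below: cast every module to half precision but skip batchnorm, so its parameters and running statistics stay in fp32. This is a minimal reconstruction in plain PyTorch, not the actual fastai2 code, and `convert_network_to_half` is a hypothetical name.

```python
import torch.nn as nn

def convert_network_to_half(model: nn.Module) -> nn.Module:
    # Cast the parameters and buffers of every module to fp16,
    # except batchnorm, which stays in fp32 for numerical stability.
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            continue  # keep batchnorm weights and running stats in fp32
        for p in module.parameters(recurse=False):
            p.data = p.data.half()
        for b in module.buffers(recurse=False):
            b.data = b.data.half()
    return model
```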

It seems that, for the same reason, we also wouldn't want layernorm turned into fp16.

But if I read the code correctly, we turn layernorm into fp16 when using to_fp16. I tried to fix it, but I don't know how to make an fp32 layernorm consume fp16 inputs and produce fp16 outputs. Could you please fix it?

Thanks in advance.
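
For reference, one workaround sometimes used for this is to subclass `nn.LayerNorm` so the parameters stay in fp32, fp16 inputs are upcast for the normalization itself, and the result is cast back to the input's dtype. This is only a sketch, not a fastai API, and `FP32LayerNorm` is a hypothetical name:

```python
import torch
import torch.nn as nn

class FP32LayerNorm(nn.LayerNorm):
    # Parameters stay in fp32; fp16 inputs are upcast for the reduction,
    # and the output is cast back so downstream fp16 layers are unaffected.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = super().forward(x.float())  # normalize in fp32 for stability
        return out.to(x.dtype)            # cast back, e.g. to fp16
```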

jph00 commented 4 years ago

Probably best to use to_native_fp16 to handle this for now. If you figure out a solution, feel free to send in a PR.
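
For example (a hypothetical setup; any Learner should work the same way), since `to_native_fp16` switches training to PyTorch's native AMP, whose autocast runs `layer_norm` in fp32 automatically:

```python
from fastai2.vision.all import *

# Hypothetical example: to_native_fp16 enables PyTorch's native AMP
# (torch.cuda.amp), and autocast keeps layer_norm running in fp32.
dls = ImageDataLoaders.from_folder(untar_data(URLs.MNIST_SAMPLE))
learn = cnn_learner(dls, resnet18, metrics=accuracy).to_native_fp16()
learn.fit_one_cycle(1)
```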

richarddwang commented 4 years ago

Got it, thanks!