Open jgsimard opened 3 years ago
Are there plans to enable mixed precision training? Thanks!

I don't have plans to support fp16 training. This repo was developed a while ago, and it is hard to extend for mixed precision training, especially the CUDA kernels.
I would recommend using NVIDIA Apex for fp16 training. It is easy to use and can be adapted into your own code with little effort.
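For reference, the usual Apex `amp` pattern looks roughly like the sketch below. This is not from this repo — it follows Apex's documented API (`amp.initialize`, `amp.scale_loss`), and `MyModel`, `criterion`, and `loader` are placeholders for your own model, loss, and dataloader.

```python
import torch
from apex import amp  # assumes NVIDIA Apex is installed

# Placeholders: substitute your own model, loss, and dataloader.
model = MyModel().cuda()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level="O1" patches eligible ops to run in fp16 automatically.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs.cuda()), targets.cuda())
    # Scale the loss so fp16 gradients do not underflow.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```

Note this only covers the training loop; custom CUDA kernels (as in this repo) would still need their own fp16 code paths.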