Closed yvielcastillejos closed 5 months ago
In lines 399-400 of https://github.com/mit-han-lab/efficientvit/blob/master/efficientvit/models/nn/ops.py ,
it seems that we convert back to FP32 during training. Why would this be the case? Does training that part in FP16 significantly hurt accuracy?
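For context on why such a cast might matter: FP16 has a maximum representable value of about 65504, so accumulating large sums (as in attention-style reductions) in half precision can overflow to `inf`, while the same computation in FP32 stays finite. A minimal NumPy sketch of this effect (a hypothetical illustration, not the EfficientViT code itself):

```python
import numpy as np

# FP16 max is ~65504, so a running sum of 1000 * 100.0
# overflows in half precision but is exact in single precision.
x = np.full(1000, 100.0, dtype=np.float16)

fp16_sum = np.float16(0)
for v in x:  # naive accumulation entirely in FP16
    fp16_sum = np.float16(fp16_sum + v)

fp32_sum = x.astype(np.float32).sum()  # cast to FP32 before reducing

print(fp16_sum)  # inf (overflowed past the FP16 max of ~65504)
print(fp32_sum)  # 100000.0
```

If the repo's cast serves this purpose, it would be keeping a numerically sensitive reduction in FP32 even under mixed-precision training, which is a common pattern with `torch.autocast`.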
Check out #15