apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

[RFC] Denormal floating point values handling #19361

Open grygielski opened 3 years ago

grygielski commented 3 years ago

Problem statement

Currently in MXNet there is no mechanism for handling denormal floating point values (wikipedia) in parameters/inputs/outputs. Such numbers are problematic in terms of computation because adding or multiplying them requires more CPU instructions than normal floating point numbers. However, they are so close to zero (e.g. ~1e-30) that most of the time they can be rounded to 0 without any loss in the model's accuracy.

It can be done simply by checking every single parameter of the model against some small threshold and rounding all parameters below this threshold to 0. This adds some overhead to saving/loading parameters, and it is not a complete solution because denormal values can also appear in input/output values during inference.
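
A minimal sketch of this thresholding idea, assuming the parameters are available as a dict of numpy float32 arrays (the flush_denormal_params helper and the choice of np.finfo(np.float32).tiny as the threshold are illustrative, not existing MXNet API):

import numpy as np

# Illustrative helper (not MXNet API): zero out sub-threshold parameters in place,
# e.g. right before saving or right after loading a model.
def flush_denormal_params(params, threshold=np.finfo(np.float32).tiny):
    # np.finfo(np.float32).tiny is the smallest positive normal float32 (~1.18e-38),
    # so anything smaller in magnitude is either denormal or already zero.
    for name, arr in params.items():
        arr[np.abs(arr) < threshold] = 0.0
    return params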

A cleaner solution would be to use hardware features of modern CPUs. Since the SSE2 extension, there are CPU flags that handle denormals automatically: DAZ (denormals-are-zero) and FTZ (flush-to-zero). They can be set inside C++ code using intrinsic instructions.
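
For illustration, the same flags can also be toggled from Python with the daz package (used further down in this thread); a minimal sketch of their effect, assuming numpy on an x86 CPU with SSE2:

import numpy as np
import daz  # small wrapper around the FTZ/DAZ bits of the MXCSR register

daz.set_ftz()  # FTZ: denormal results of SSE computations are flushed to 0
daz.set_daz()  # DAZ: denormal inputs to SSE computations are treated as 0

x = np.float32(2e-38)        # small, but still a normal float32
print(x * np.float32(0.1))   # exact result ~2e-39 is denormal, so with FTZ set this prints 0.0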

An important point is that denormal values are rather rare, since most modern NN architectures do not operate asymptotically close to 0. However, they can show up in RNN models (because of the sigmoid gate activation) or when using layers like PReLU (https://github.com/apache/incubator-mxnet/issues/19218).
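
As a concrete illustration of how a sigmoid gate can produce denormals in float32 (values below ~1.18e-38), a quick numpy check:

import numpy as np

# A strongly negative pre-activation pushes the sigmoid output into the denormal range.
x = np.float32(-88.0)
s = np.float32(1.0) / (np.float32(1.0) + np.exp(-x, dtype=np.float32))
print(s)                              # ~6e-39, i.e. a denormal float32
print(s < np.finfo(np.float32).tiny)  # True: below the smallest normal float32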

My question here is: what way of handling such cases does the community prefer? I would love to hear your suggestions and opinions about the proposed solutions.

Proposed solutions

github-actions[bot] commented 3 years ago

Welcome to Apache MXNet (incubating)! We are on a mission to democratize AI, and we are glad that you are contributing to it by opening this issue. Please make sure to include all the relevant context, and one of the @apache/mxnet-committers will be here shortly. If you are interested in contributing to our project, let us know! Also, be sure to check out our guide on contributing to MXNet and our development guides wiki.

TaoLv commented 3 years ago

@pengzhao-intel @mgouicem could you please help to review the proposal? Many thanks!

pengzhao-intel commented 3 years ago

It would be more convenient to set DAZ/FTZ by default. The only concern is whether it affects training accuracy (the impact is presumably very limited).

We have encountered several performance issues with denormal computation in the past, but they only happened in a user's debugging mode with randomly generated numbers. Thus, I am not sure whether this issue will happen in real cases.

Let's wait a while for input from other members :)

mgouicem commented 3 years ago

Thanks @grygielski for the proposal. I definitely agree with the premise of this proposal: most users do not know/care about denormals and they just get in the way of good performance for some use cases.

For ease of use, I would encourage disabling denormals by default and going for option 2 or 4 (that is, setting both FTZ and DAZ), since users who need denormals for accuracy usually know about denormals in the first place, whereas for general users denormals will likely not make any difference to accuracy but will impact performance.

I have no opinion on which one is best for code simplicity/maintenance though, so I will let the MXNet contributors comment further on that.

TaoLv commented 3 years ago

Thanks for your comments, @mgouicem! @szha @leezu could you please help to review? If we want to address this at the framework level, we probably need to clearly define the behavior on different hardware platforms.

szha commented 3 years ago

I'd like to see whether there are real use cases where denormal floats are legitimately needed. @xidulu @szhengac are there any known cases where such precision is required?

xidulu commented 3 years ago

@szha In gluon.distribution, floating point numbers with very small values are often clipped to a minimum value to avoid numerical issues in downstream tasks, e.g. https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/probability/distributions/utils.py#L164-L172

The clip is really necessary, otherwise tons of NaNs come up when very small values are fed into OPs like log.
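
A simplified sketch of that kind of clip (not the exact code in the linked utils.py), assuming numpy:

import numpy as np

eps = np.finfo('float32').eps          # machine epsilon, ~1.19e-07
prob = np.float32(0.0)                 # e.g. a probability that underflowed to 0
print(np.log(np.maximum(prob, eps)))   # finite (~ -15.9) instead of -inf / NaN downstream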

grygielski commented 3 years ago

@xidulu Thanks a lot for describing your use case. In that code, np.finfo('float32').eps returns the machine epsilon, which is far from the denormal range. Therefore, these 2 flags shouldn't affect your clipping. To create a denormal number from the machine epsilon, you would have to raise it to roughly the 6th power:

>>> import numpy as np
>>> import daz
>>> daz.set_ftz()
>>> daz.set_daz()

>>> np.power(np.finfo('float32').eps, 5, dtype=np.float32)
2.4074124e-35

>>> np.power(np.finfo('float32').eps, 6, dtype=np.float32)
0.0

szha commented 3 years ago

Based on the discussion, I think the combined approach for dealing with denormal floats sounds reasonable. @grygielski thanks for the proposal.