Open upsj opened 8 months ago
Is there any reason to avoid propagating NaN? If there's no performance penalty, I think propagating NaN makes it easier to see that the algorithm did not work out due to some arithmetic error.
@yhmtsai if you are computing, for example, `y = 0.5 * A * x + 0.0 * y`, then propagating NaNs from `y` is unnecessary, since it is mathematically equivalent to just leaving `y` out. This can easily happen if `y` is not initialized before that computation.
No, `0 * NaN` should be NaN, not zero, so it is not mathematically equal to just leaving them out. Yes, it might happen with uninitialized memory, but I would say the memory should be properly initialized, or the user should use the call that does not touch the uninitialized part. (For us, we could probably provide `A->apply(alpha, x, y)` for `y = alpha * A * x`.)
I know current vendor libraries usually treat 0 as "do not touch" due to BLAS conventions. I am not sure whether the other routines follow the same rule.
> `0 * NaN` should be NaN, not zero
That makes calculations more fragile; we already do similar things (special-casing zeros) inside our solver kernels.
@yhmtsai we treat `0 * NaN = 0` only in the case of axpy-like operations. So there will still be NaN propagation in normal SpMV, dot products, etc. But for these axpy-like operations, users will not care about the IEEE standard. For them, initializing `x` by setting it to 0 and multiplying it with 0 should be equivalent.
Operations like $\alpha A x + 0 \cdot y$ may propagate NaNs from `y` to the output despite the zero coefficient. This can be avoided by explicitly checking the beta scaling factors for zero.
TODO: