clMathLibraries / clSPARSE

a software library containing Sparse functions written in OpenCL
Apache License 2.0

Adding Compensated Summation to SpM-DV #121

Closed · jlgreathouse closed this 9 years ago

jlgreathouse commented 9 years ago

This patch is both an overarching solution to Issue #109 and a feature enhancement that allows users to optionally compute SpM-DV using compensated summation.

Compensated summation is a technique for reducing the cumulative error that occurs during floating point summation. Such errors occur both because of differences in magnitude between the summands (where part or all of the smaller one may be rounded away) and because of catastrophic cancellation, where subtracting a large value from a sum causes previously marginal values that were rounded away to become significant (and now wrong).
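As a small illustration (not from the clSPARSE code), both effects are easy to reproduce in single precision:

```c
#include <stdio.h>

int main(void)
{
    /* Magnitude difference: 1.0f is smaller than half the spacing (ULP)
     * of floats near 1.0e8f, so it is rounded away entirely.           */
    float big   = 1.0e8f;
    float small = 1.0f;
    float sum   = big + small;   /* still 1.0e8f; the 1.0f is lost      */

    /* Cancellation: subtracting the large value exposes the lost
     * contribution as a completely wrong result.                       */
    float recovered = sum - big; /* 0.0f, but should be 1.0f            */

    printf("big + small         = %.1f\n", sum);
    printf("(big + small) - big = %.1f\n", recovered);
    return 0;
}
```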

Compensated summation does this, in part, by replacing each simple floating point addition with an error-free transformation such as 2Sum or Fast2Sum. These transformations compute A + B = (Sum, Error), where Sum is the rounded result of the addition and Error is the exact rounding error, so that Sum + Error reproduces A + B exactly.
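For reference, here is a minimal sketch of the two classic transformations in C; the names `two_sum_t`, `two_sum`, and `fast_two_sum` are placeholders, not the identifiers used in the clSPARSE kernels:

```c
/* Error-free transformations: a + b == r.sum + r.err holds exactly.
 * Requires strict IEEE arithmetic (no reassociation, e.g. no -ffast-math). */
typedef struct { double sum; double err; } two_sum_t;

/* Knuth's TwoSum: no ordering requirement on a and b (6 flops). */
static two_sum_t two_sum(double a, double b)
{
    two_sum_t r;
    r.sum = a + b;
    double b_virtual = r.sum - a;
    double a_virtual = r.sum - b_virtual;
    r.err = (a - a_virtual) + (b - b_virtual);
    return r;
}

/* Dekker's Fast2Sum: valid only when |a| >= |b| (3 flops). */
static two_sum_t fast_two_sum(double a, double b)
{
    two_sum_t r;
    r.sum = a + b;
    r.err = b - (r.sum - a);
    return r;
}
```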

The Sum2 algorithm uses this transformation to greatly reduce the final error of a summation. As you add each value into the running sum, you separately accumulate the rounding errors, and at the end of the summation you add the final sum and the final error together. The end result is that your final answer is as accurate (0 ULP) as performing the summation in 2x precision and then casting down to your original precision. In other words, if all of your values to sum are floats, your answer should be exactly the same as if you'd performed the summation using doubles and then cast the final answer to a float.
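A hypothetical serial version, building on the `two_sum()` sketch above (again, not the actual clSPARSE code), looks like this:

```c
/* Serial Sum2 sketch: accumulate the running sum and its rounding error
 * separately, and fold the error back in once at the end.               */
static double sum2(const double *x, int n)
{
    double sum = 0.0;
    double err = 0.0;
    for (int i = 0; i < n; ++i) {
        two_sum_t t = two_sum(sum, x[i]);
        sum  = t.sum;    /* rounded running sum                     */
        err += t.err;    /* rounding errors accumulated separately  */
    }
    return sum + err;    /* compensated result                      */
}
```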

This code uses a parallel version of this algorithm, PSum2. This compensated summation was added to both csrmv_general and csrmv_adaptive, and test-blas2 and clsparse-bench were both modified to add a command line option that enables compensated summation.
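The property that makes a parallel version possible is that partial (sum, error) pairs can be merged with the same transformation. A rough sketch of such a merge step, illustrative only and not the kernel code in this patch:

```c
/* Merge two partial results from a parallel reduction: TwoSum the partial
 * sums and accumulate both the new rounding error and the carried errors. */
static two_sum_t psum2_combine(two_sum_t a, two_sum_t b)
{
    two_sum_t r = two_sum(a.sum, b.sum);
    r.err += a.err + b.err;
    return r;
}
```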

When running test-blas2 with compensated summation enabled, all double-precision tests pass with 0 ULP difference compared to a CPU-side implementation of the same algorithm. The single-precision tests sometimes differ from the CPU calculations performed in double precision. This is because, on my test platforms, the AMD OpenCL runtime does not support single-precision denormals; instead, it flushes all denormals to zero, causing a large error that we can't compensate for.

While this replaces all of the additions in SpM-DV with many more FLOPs, the overhead of compensated summation is, in most cases, modest. On a FirePro W9100, the average single-precision slowdown of csrmv_adaptive is 5%, and the average double-precision slowdown is 2%. On a Radeon R9 Fury X (with a DPFP rate of 1/16 the SPFP rate, but with much higher bandwidth than the FirePro W9100), the slowdown is 10% for single-precision and 28% for double-precision.

[Figure: normalized performance with compensated summation enabled. Note that the graphic is not a comparison between the relative performance of the two cards; '1' in each bar is the performance in that category with traditional summation.]

Note that we do not perform a fully compensated dot product, as described in the Yamanaka, Ogita, Rump, and Oishi paper. Performing an error-free transformation on the products in our SpM-DV is not especially computationally intensive, but carrying the extra error values from the multiplications around is burdensome in csrmv_adaptive. We've seen that the majority of the error in our calculations comes from the summation, so we deemed compensating only the summation good enough. Nonetheless, this means that we can't fully guarantee 0 ULP accuracy compared to 2x-precision calculations, because the error due to the multiplies may compound.
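For completeness, the error-free product transformation that this patch omits can be written with a fused multiply-add. This is a sketch only, reusing the `two_sum_t` pair type from above and assuming `fma()` is available and the product neither overflows nor underflows:

```c
#include <math.h>

/* TwoProduct: a * b == r.sum + r.err exactly (barring over/underflow). */
static two_sum_t two_product(double a, double b)
{
    two_sum_t r;
    r.sum = a * b;
    r.err = fma(a, b, -r.sum);   /* exact rounding error of the product */
    return r;
}
```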

kknox commented 9 years ago

@jlgreathouse Give me an hour face-to-face tomorrow so that we can get this PR merged in. I would like to give you a quick training/demonstration on interactive rebasing to squash your commits down to a set of 'core' commits that makes sense. I like to keep the repository commit history relatively simple, and rebasing is the tool to use.