maedoc / libtvb

TVB C library

Consider improved error algorithms #135

Closed · maedoc closed 8 years ago

maedoc commented 8 years ago

Several common operations can be prone to error, such as summation (cf. the Kahan summation algorithm), and lower-error alternatives are available for other common operations like variance, covariance and so on.

A systematic error analysis across the code would be great but time intensive.
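
For reference, a minimal sketch of Kahan compensated summation in C (the function name is hypothetical, not the libtvb API):

```c
#include <stddef.h>

/* Kahan compensated summation: the running compensation `c` captures
 * the low-order bits lost when `sum + y` rounds. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0, c = 0.0;
    for (size_t i = 0; i < n; i++) {
        double y = x[i] - c;  /* apply correction from last step */
        double t = sum + y;   /* low-order bits of y are lost here */
        c = (t - sum) - y;    /* recover the lost bits */
        sum = t;
    }
    return sum;
}
```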

i-Zaak commented 8 years ago

Agreed, I've also been thinking about this; we should definitely first try to get some estimates on the errors for particular computations...

maedoc commented 8 years ago

Some operations like summation could be isolated into library functions, to implement both lower-error and possibly SIMD versions. Summation and weighted summation are low-hanging fruit here.

AbheekG commented 8 years ago

@maedoc If I am thinking correctly, the project aims to be implemented using parallel computation. In that case, operations like summation will be implemented by fast parallel algorithms like reduce (or predefined functions in a package library). For now, we can just have the custom library functions.
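
As a rough illustration only, a parallel reduce in C could look like the following OpenMP sketch (assuming compilation with OpenMP enabled; the function name is hypothetical):

```c
#include <stddef.h>

/* OpenMP parallel reduction: each thread keeps a private partial sum,
 * which the reduction clause combines at the end. */
double parallel_sum(const double *x, size_t n)
{
    double sum = 0.0;
#pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += x[i];
    return sum;
}
```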

maedoc commented 8 years ago

While I agree that, in general, ops in a loop can be replaced by a parallel map-reduce, I want to make sure single-core performance is optimal. Also, we don't (yet) have many different cases to handle.

Summation happens often enough that it is worth special handling with a less error-prone algorithm such as the recursive pairwise approach, as well as taking the time to use SIMD intrinsics.
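
A minimal sketch of the recursive pairwise approach (hypothetical name, not the libtvb API): splitting the array in half at each level keeps rounding-error growth at O(log n) instead of O(n), and the small-n base case limits recursion overhead.

```c
#include <stddef.h>

double pairwise_sum(const double *x, size_t n)
{
    if (n <= 8) {  /* base case: naive loop */
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += x[i];
        return s;
    }
    size_t m = n / 2;
    return pairwise_sum(x, m) + pairwise_sum(x + m, n - m);
}
```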

paramhanji commented 8 years ago

Is the emphasis on better algorithms for error detection, or on finding alternative, efficient algorithms for the common operations described above? The title and description seem to indicate that error detection is important, but the subsequent conversation is directed towards alternatives.

maedoc commented 8 years ago

finding alternative, efficient algorithms for the common operations

this.

paramhanji commented 8 years ago

For summation, the recursive pairwise approach appears suitable, especially since it needs to be parallelized in the future.

AbheekG commented 8 years ago

@catchchaos Parallel map-reduce (discussed earlier) is the parallel implementation of the recursive pairwise approach. Moreover, the recursive pairwise algorithm gives an error of O(log n), while the Kahan algorithm gives an error of O(1), which is useful for operations that are used often enough, as noted by Marmaduke (although it requires four times the arithmetic and has four times the latency of a simple summation). For the parallel implementation, a suitable algorithm will be used. For low error, advanced algorithms like Rump's algorithm can/will be used, which takes approximately 7 times as long as a simple parallel summation, with an absolute error of about n·2^(-28)·macheps·(maximum quantity).
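
For context, Rump-style accurate summation is built from error-free transformations such as Knuth's TwoSum. A minimal sketch (names hypothetical; this follows the simpler Sum2 algorithm of Ogita, Rump and Oishi rather than Rump's full algorithm):

```c
#include <stddef.h>

/* TwoSum: s = fl(a + b) and the exact rounding error e,
 * so that a + b == s + e holds exactly. */
static void two_sum(double a, double b, double *s, double *e)
{
    *s = a + b;
    double bv = *s - a;
    *e = (a - (*s - bv)) + (b - bv);
}

/* Sum2: accumulate the exact local errors and add them back at the end. */
double sum2(const double *x, size_t n)
{
    double s = 0.0, err = 0.0, e;
    for (size_t i = 0; i < n; i++) {
        two_sum(s, x[i], &s, &e);
        err += e;
    }
    return s + err;
}
```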

maedoc commented 8 years ago

Rump’s algorithm

I wasn't aware of that one, but I'm not sure if we need so much precision.

maedoc commented 8 years ago

Closing in favor of new issue #155.