Closed Chronum94 closed 7 years ago
This would add complexity to the codebase, increase compilation times, and increase binary sizes. We would also need to implement BLAS and FFT libraries for this data type across three different backends.
Are the benefits from this really worth all this effort?
Heh. When you put it that way, one is more inclined to just make the application-specific implementation of it. Point taken.
My thinking when suggesting this was that it would let more affordable hardware do FP64-class work, but looking at the full scope of the required effort, it's a challenge that's hard to justify.
> which may not be possible to use for many purposes (particularly PDE simulations, FDTD, FDFD, BPM, et al.)
BTW, as far as I can tell these types of problems are bandwidth-bound, so the compute FLOPS being low should not be an issue.
Closing this issue for now.
Consumer-grade GPUs are severely stunted when it comes to FP64 computing power, and many scientific applications that could potentially see speedups of >20-30x are hindered because that speedup is only achievable in FP32, which may not be possible to use for many purposes (particularly PDE simulations, FDTD, FDFD, BPM, et al.)
Guillaume Da Graça, David Defour. Implementation of float-float operators on graphics hardware. 7th Conference on Real Numbers and Computers (RNC7). http://hal.archives-ouvertes.fr/docs/00/06/33/56/PDF/float-float.pdf
Andrew Thall. Extended-Precision Floating-Point Numbers for GPU Computation. http://andrewthall.org/papers/df64_qf128.pdf
Those are two papers on the implementation of double-floats on GPUs which the devs may find useful when implementing it.
Typically reported performance is ~40% of FP32 performance, which is an order of magnitude better than the 1/24 (~4%) FP64:FP32 performance ratio available on almost all consumer-grade GPUs.