Data.Array.Accelerate
defines an embedded language of array computations for high-performance computing in Haskell. Computations on multi-dimensional, regular arrays are expressed in the form of parameterised collective operations (such as maps, reductions, and permutations). These computations are online-compiled and executed on a range of architectures.
For more details, see our papers:
There are also slides from some fairly recent presentations:
Chapter 6 of Simon Marlow's book Parallel and Concurrent Programming in Haskell contains a tutorial introduction to Accelerate.
Trevor's PhD thesis details the design and implementation of the frontend optimisations and the CUDA backend.
Table of Contents
As a simple example, consider the computation of a dot product of two vectors of single-precision floating-point numbers:
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = fold (+) 0 (zipWith (*) xs ys)
Except for the type, this code is almost the same as the corresponding Haskell code on lists of floats. The types indicate that the computation may be online-compiled for performance; for example, using Data.Array.Accelerate.LLVM.PTX.run it may be offloaded to a GPU on the fly.
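As a rough end-to-end sketch (assuming the accelerate and accelerate-llvm-ptx packages are installed; the reference interpreter in Data.Array.Accelerate.Interpreter could be substituted to run without a GPU), host arrays are embedded into the computation with use, and the backend's run function compiles and executes it:

-- Minimal usage sketch; package and backend choice are assumptions,
-- not the only way to run an Accelerate program.
import Data.Array.Accelerate          as A
import Data.Array.Accelerate.LLVM.PTX as PTX

dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = A.fromList (Z :. 10) [1..10]  :: Vector Float   -- host-side input data
      ys = A.fromList (Z :. 10) [10,9..] :: Vector Float
  -- 'use' embeds the host arrays into the embedded computation;
  -- 'run' online-compiles the program and executes it on the GPU
  print (PTX.run (dotp (A.use xs) (A.use ys)))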
Package accelerate is available from:
Hackage (stable releases): cabal install accelerate
GitHub (development version): git clone https://github.com/AccelerateHS/accelerate.git
To install the Haskell toolchain, try GHCup.
The following supported add-ons are available as separate packages:
Install them from Hackage with cabal install PACKAGENAME.
The accelerate-examples package provides a range of computational kernels and a few complete applications. To install these from Hackage, issue cabal install accelerate-examples. The examples include:
LULESH-accelerate is an implementation of the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics (LULESH) mini-app. LULESH represents a typical hydrodynamics code such as ALE3D, but is a highly simplified application, hard-coded to solve the Sedov blast problem on an unstructured hexahedron mesh.
Accelerate users have also built some substantial applications of their own. Please feel free to add your own examples!
The Accelerate team (past and present) consists of:
The maintainer and principal developer of Accelerate is Trevor L. McDonell trevor.mcdonell@gmail.com.
accelerate-haskell@googlegroups.com (discussions on both use and development are welcome)

If you use Accelerate for academic research, you are encouraged (though not required) to cite the following papers:
Manuel M. T. Chakravarty, Gabriele Keller, Sean Lee, Trevor L. McDonell, and Vinod Grover. Accelerating Haskell Array Codes with Multicore GPUs. In DAMP '11: Declarative Aspects of Multicore Programming, ACM, 2011.
Trevor L. McDonell, Manuel M. T. Chakravarty, Gabriele Keller, and Ben Lippmeier. Optimising Purely Functional GPU Programs. In ICFP '13: The 18th ACM SIGPLAN International Conference on Functional Programming, ACM, 2013.
Robert Clifton-Everest, Trevor L. McDonell, Manuel M. T. Chakravarty, and Gabriele Keller. Embedding Foreign Code. In PADL '14: The 16th International Symposium on Practical Aspects of Declarative Languages, Springer-Verlag, LNCS, 2014.
Trevor L. McDonell, Manuel M. T. Chakravarty, Vinod Grover, and Ryan R. Newton. Type-safe Runtime Code Generation: Accelerate to LLVM. In Haskell '15: The 8th ACM SIGPLAN Symposium on Haskell, ACM, 2015.
Robert Clifton-Everest, Trevor L. McDonell, Manuel M. T. Chakravarty, and Gabriele Keller. Streaming Irregular Arrays. In Haskell '17: The 10th ACM SIGPLAN Symposium on Haskell, ACM, 2017.
Accelerate is primarily developed by academics, so citations matter a lot to us. As an added benefit, you increase Accelerate's exposure and potential user (and developer!) base, which benefits everyone who uses Accelerate. Thanks in advance!
Here is a list of features that are currently missing:
Many more features... contact us!