yogevb / a-dda

Automatically exported from code.google.com/p/a-dda

Iterative solver on OpenCL (GPU) devices #199

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
On recent GPU devices the matrix-vector multiplication in ADDA is as fast as 
the preparation of the next argument vector within the iterative solver 
(currently done on the CPU). The iterative solver should therefore also run on 
the GPU, both to avoid transferring vectors between host and device every 
iteration and to speed up the computation. Since most of the functions executed 
by the iterative solvers in ADDA are level-1 (vector) basic linear algebra 
operations, the clAmdBlas library could potentially be employed to improve the 
execution speed further. This would mainly improve computation speed on larger 
grids with high dipole counts.

Original issue reported on code.google.com by Marcus.H...@gmail.com on 31 May 2014 at 3:36

GoogleCodeExporter commented 9 years ago
The BiCG solver in OpenCL was introduced in r1349. It uses the clAmdBlas 
library for all vector-related calculations. With the USE_CLBLAS compiler 
option, the BiCG solver reduces the time spent between matrix-vector 
multiplications to a small fraction of what it was before. 
So far it has only been tested on an AMD GPU.

Other solvers seem more complicated to translate directly to OpenCL, and they 
will probably perform slower. However, to give more flexibility (in case of 
poor convergence of BiCG), translating a few more solvers seems desirable.

Original comment by Marcus.H...@gmail.com on 31 May 2014 at 4:49

GoogleCodeExporter commented 9 years ago
Indeed, that is a nice proof-of-principle that can be used to estimate 
potential acceleration. However, I think that a more convenient (and scalable) 
approach is to leave iterative.c almost intact, but instead concentrate on 
linalg.c. 

So all functions in the latter should be rewritten (under ifdef OCL_BLAS) 
through calls to clBLAS. Actually, it may be possible to use the same symbols 
(xvec, pvec, etc.) and function calls in iterative.c. The only difference is 
that they will be defined either as standard C vectors or as OpenCL vectors, 
depending on the compilation mode. Awareness of the actual type of these 
vectors will only be required at the start and end of the iterative solvers 
(to move the vectors to and from the GPU).
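The suggested compile-time switch could look roughly like this (a hypothetical sketch, not the actual linalg.c; only the OCL_BLAS name comes from the comment, the rest is illustrative). iterative.c always calls the same function on the same symbols, while the underlying vector type changes with the build mode:

```c
#include <complex.h>
#include <stddef.h>

/* Sketch of the proposed abstraction: iterative.c calls
 * nIncrem(xvec, pvec, a, n) in either mode; only linalg.c
 * knows what a "vector" actually is. */
#ifdef OCL_BLAS
#include <clBLAS.h>
typedef cl_mem doublecomplex_v;           /* vector lives on the GPU */

void nIncrem(doublecomplex_v y, doublecomplex_v x, double complex a, size_t n)
{
    /* would forward to clBLAS; command-queue and event plumbing
     * is omitted in this sketch, e.g. something like
     * clblasZaxpy(n, ..., x, 0, 1, y, 0, 1, 1, &queue, 0, NULL, NULL); */
    (void)y; (void)x; (void)a; (void)n;
}
#else
typedef double complex *doublecomplex_v;  /* plain host array */

void nIncrem(doublecomplex_v y, doublecomplex_v x, double complex a, size_t n)
{
    for (size_t i = 0; i < n; i++) y[i] += a * x[i];
}
#endif
```

With this layout, host/device transfers are confined to the entry and exit of each solver, as proposed above.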

Original comment by yurkin on 3 Aug 2014 at 5:55