Marcel-Rodekamp / NSL

Nanosystem Simulation Library (NSL) implements statistical simulations for systems on the nanoscale

Feature/gpu basics #112

Closed Marcel-Rodekamp closed 2 years ago

Marcel-Rodekamp commented 2 years ago

General Support

I implemented GPU support via

NSL::Tensor<Type> TensorName(NSL::GPU(), sizes... );

where the default construction, i.e. omitting NSL::GPU(), creates CPU tensors. The handle is in general of type NSL::Device and is meant to encompass all the information about the device (so far only which device to pick). Further, I added a copy function

NSL::Tensor NSL::Tensor::to(const NSL::Device & dev, bool non_blocking = false)

which creates a copy of the tensor on the given device, as well as an in-place copy function

void NSL::Tensor::to(bool inplace, const NSL::Device & dev, bool non_blocking = false)

to move a tensor directly onto the device without creating a new instance.
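For concreteness, here is a minimal usage sketch of the constructors and copy functions described above. The header name NSL.hpp, the element type double, and the tensor sizes are assumptions chosen for illustration; the calls follow the signatures as written above.

```cpp
#include "NSL.hpp"  // assumed umbrella header

int main(){
    // Default construction (no device handle) creates a CPU tensor.
    NSL::Tensor<double> cpuT(4, 4);

    // Passing a device handle as the first argument places the tensor on that device.
    NSL::Tensor<double> gpuT(NSL::GPU(), 4, 4);

    // Copying variant: returns a new tensor living on the requested device.
    auto copyOnGPU = cpuT.to(NSL::GPU());

    // In-place variant: moves cpuT itself onto the GPU, no new instance created.
    cpuT.to(true, NSL::GPU());

    return 0;
}
```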

The idea is that as soon as a tensor is on a device, i.e. GPU or CPU, all computations involving it are executed on that device. If two tensors on different devices are combined, a runtime error is thrown.
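To illustrate the mixed-device behaviour, a short sketch; the use of operator+ and the exact exception type are assumptions, only the fact that combining tensors on different devices throws a runtime error comes from the description above.

```cpp
NSL::Tensor<double> a(4);              // lives on the CPU
NSL::Tensor<double> b(NSL::GPU(), 4);  // lives on the GPU

try {
    auto c = a + b;  // combining tensors on different devices
} catch (const std::exception & e) {
    // a runtime error is thrown; move one tensor first, e.g. a.to(true, NSL::GPU());
}
```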

Fermion Matrix

Enabling the fermion matrix for GPU computation required a small change to the lattice types. Hence, NSL::Lattice::SpatialLattice got a new function

void NSL::Lattice::SpatialLattice::to(const NSL::Device & dev);

which copies the internal tensors to the device in place.
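A sketch of how this might be used when setting up a fermion matrix on the GPU; the concrete lattice type NSL::Lattice::Ring and its constructor argument are assumptions for illustration, any lattice deriving from NSL::Lattice::SpatialLattice should behave the same way.

```cpp
// Hypothetical concrete lattice; assumed to derive from NSL::Lattice::SpatialLattice.
NSL::Lattice::Ring<double> lattice(8);

// Copy the lattice's internal tensors to the GPU in place.
lattice.to(NSL::GPU());

// Tensors obtained from this lattice (e.g. the hopping matrix entering the fermion matrix)
// now live on the GPU, so applying the fermion matrix runs on the GPU as well.
```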