Open lucaparisi91 opened 4 months ago
Custom allocators can be used on the device via the uses_allocators clause (OpenMP 5.1). This is supported in LLVM 18.1 but not in NVIDIA nvhpc 24.5. I could not find a way to allocate a vector using a custom allocator.
int *shared_ptr = (int *) omp_alloc(N * sizeof(int), llvm_omp_target_shared_mem_alloc);
#pragma omp target is_device_ptr(shared_ptr)
{ ... }
omp_free(shared_ptr, llvm_omp_target_shared_mem_alloc);
double c[BLOCK_SIZE];
#pragma omp allocate(c) allocator(omp_pteam_mem_alloc)
On some compilers (e.g. NVIDIA), statically allocated arrays may be placed in shared memory instead of global memory even without the allocate directive. Predefined allocators did not seem to be supported on the target with nvhpc 24.5.
The Cray documentation instead suggests that omp_cgroup_mem_alloc should be used.
The allocator also has to be made available inside the target region with the uses_allocators clause.

#pragma omp target teams num_teams(1) reduction(+:sum) shared(my_allocator) private(c) uses_allocators(omp_pteam_mem_alloc) allocate(omp_pteam_mem_alloc: c)
{
....
}
This is supported in LLVM 18.1 (clang) but not in NVIDIA nvhpc 24.5.
One can define a host memory allocator that returns pinned (page-locked) memory; this should map to a CUDA pinned allocation (e.g. cudaMallocHost). It is better to use the omp_alloc routine, as the allocate directive is still poorly supported across compilers.
omp_memspace_handle_t c_memspace = omp_default_mem_space;
omp_alloctrait_t c_traits[2] = { {omp_atk_pinned, omp_atv_true}, {omp_atk_alignment, 128} };
omp_allocator_handle_t c_alloc = omp_init_allocator(c_memspace, 2, c_traits);
c = (double *) omp_alloc(n * sizeof(double), c_alloc);
This compiled but seemed to be ignored by the NVIDIA nvc++ 24.5 compiler.
Show how to allocate MPI shared memory and pinned memory.
Examples in custom_allocations.
HPE seems to have some Cray-specific allocators, based on the last slide of https://www.openmp.org/wp-content/uploads/2022-04-29-ECP-OMP-Telecon-HPE-Compiler.pdf. There is also some material on allocators at https://doku.lrz.de/files/11497064/11497068/1/1684602040267/OpenMP+Workshop+Day+2.pdf. Pinned memory is possible, as well as shared memory allocation.