JinwoongKim / Massively-Parallel-Query-Processing-on-Heterogeneous-Architecture

Homepage
http://jinwoongkim.github.io/Massively-Parallel-Query-Processing-on-Heterogeneous-Architecture/

Use Pinned Memory instead of Pageable Memory #31

Closed: JinwoongKim closed this issue 8 years ago

JinwoongKim commented 8 years ago

We can avoid the cost of the transfer between pageable and pinned host arrays by directly allocating our host arrays in pinned memory.

However, it is possible for pinned memory allocation to fail, so you should always check for errors:

cudaError_t status = cudaMallocHost((void**)&h_aPinned, bytes);
if (status != cudaSuccess)
  printf("Error allocating pinned host memory\n");
JinwoongKim commented 8 years ago

I implemented it in the Chunk Manager.
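The repository's actual Chunk Manager code is not shown in this issue; the following is only a hypothetical sketch of how a chunk pool could be backed by pinned host memory so that chunk uploads avoid the staging copy. The ChunkPool structure and function names are assumptions made for illustration.

// Hypothetical sketch: a chunk pool whose backing store is pinned memory.
#include <cstdio>
#include <cuda_runtime.h>

struct ChunkPool {            // hypothetical name, not from the repository
  char  *base;                // pinned backing store for all chunks
  size_t chunkSize;
  size_t numChunks;
};

bool chunk_pool_init(ChunkPool *pool, size_t chunkSize, size_t numChunks) {
  pool->chunkSize = chunkSize;
  pool->numChunks = numChunks;
  // Allocate the whole pool as pinned memory and check for failure,
  // since pinning can fail if too much physical memory is requested.
  cudaError_t status =
      cudaMallocHost((void**)&pool->base, chunkSize * numChunks);
  if (status != cudaSuccess) {
    printf("Error allocating pinned chunk pool\n");
    return false;
  }
  return true;
}

char *chunk_pool_get(ChunkPool *pool, size_t index) {
  return pool->base + index * pool->chunkSize;   // pointer into pinned pool
}

void chunk_pool_destroy(ChunkPool *pool) {
  cudaFreeHost(pool->base);  // release the pinned backing store
}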