AccelerateHS / accelerate

Embedded language for high-performance array computations
https://www.acceleratehs.org

Asynchronous execution #53

Open tmcdonell opened 12 years ago

tmcdonell commented 12 years ago

@rrnewton notes in #48 that the current (driver default) behaviour is to spin while waiting for GPU operations to complete, which is not friendly towards other Haskell threads that want to do useful work. We should change this to something gentler on CPU resources, namely the CU_CTX_SCHED_BLOCKING_SYNC context flag.

Tangentially related to #13.
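
For reference, a minimal sketch of what selecting that scheduling policy might look like through the standalone Haskell `cuda` bindings (not something accelerate exposes today); the module and constructor names (`Foreign.CUDA.Driver.*`, `SchedBlockingSync`) are my best recollection of that package and should be checked against it:

```haskell
-- Sketch only: create a driver context that blocks the calling OS thread
-- while waiting for the GPU (CU_CTX_SCHED_BLOCKING_SYNC), instead of the
-- driver-default spin-wait, so other Haskell threads can keep running.
import qualified Foreign.CUDA.Driver.Context as Context
import qualified Foreign.CUDA.Driver.Device  as Device

withBlockingSyncContext :: IO a -> IO a
withBlockingSyncContext action = do
  Device.initialise []
  dev <- Device.device 0
  ctx <- Context.create dev [Context.SchedBlockingSync]
  r   <- action
  Context.destroy ctx
  return r
```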

tmcdonell commented 11 years ago

Asynchronous execution entails using non-default stream(s), with events to express waiting on dependencies.

With support for streams and events, we should also (correctly) support asynchronous memory transfer, which has additional requirements of its own (notably, staging through pinned page-locked host memory so that copies can be true asynchronous DMA).
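
As a rough illustration of the mechanism (not accelerate's implementation), the sketch below orders an asynchronous host-to-device copy on one stream before work on a second stream via an event; all `Foreign.CUDA.Driver.*` names and signatures here are assumptions based on the standalone `cuda` bindings:

```haskell
-- Sketch: enqueue an async H->D copy on one stream, record an event when it
-- completes, and make a second (compute) stream wait on that event before
-- its kernels run. The host buffer must be pinned for the copy to be a true
-- asynchronous DMA. Binding names/signatures are assumptions.
import           Data.Word                   (Word32)
import           Foreign.CUDA.Ptr            (DevicePtr, HostPtr)
import qualified Foreign.CUDA.Driver.Event   as Event
import qualified Foreign.CUDA.Driver.Marshal as Marshal
import qualified Foreign.CUDA.Driver.Stream  as Stream

transferThenCompute :: Int -> IO ()
transferThenCompute n = do
  copyStream    <- Stream.create []
  computeStream <- Stream.create []
  host <- Marshal.mallocHostArray [] n :: IO (HostPtr   Word32)  -- pinned host buffer
  dev  <- Marshal.mallocArray n        :: IO (DevicePtr Word32)
  done <- Event.create []
  Marshal.pokeArrayAsync n host dev (Just copyStream)  -- async H->D copy
  Event.record done (Just copyStream)                  -- signal when the copy finishes
  Event.wait   done (Just computeStream) []            -- computeStream waits (on device) for it
  -- ... launch kernels in computeStream here ...
  Stream.block computeStream                           -- host blocks until everything drains
```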

tmcdonell commented 11 years ago

See also:

robstewart57 commented 10 years ago

Note: this issue is further discussed in June/July 2014 on the accelerate mailing list here.

tmcdonell commented 8 years ago

This is all possible now, just not exposed very nicely yet. See this profiler output, where compute and data transfer overlap nicely, with full-speed DMA to pinned memory:

(screenshot 2016-02-10 15:55:31: profiler timeline showing compute/transfer overlap)

Note this example, however, where the CUDA pinned-memory allocator is (a) not concurrent, and (b) can be terribly slow:

(screenshot 2016-02-10 15:56:31: profiler timeline showing slow, serialised pinned-memory allocation)

So we may want to implement a nursery-style caching allocator. These screenshots are from different machines, and the latter is a 2-GPU box, so there may be further strangeness going on there...
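
For what it's worth, a minimal sketch of what such a nursery could look like for pinned host allocations, assuming the `Foreign.CUDA.Driver.Marshal` interface from the standalone `cuda` bindings; this is illustrative only, not accelerate's actual allocator code:

```haskell
-- Sketch of a nursery-style cache for pinned host allocations: freed buffers
-- are stashed in a table keyed by size and reused on the next request of the
-- same size, so the slow, serialising CUDA host allocator is only hit on a
-- cache miss. Binding names are assumed from the 'cuda' package.
import           Control.Concurrent.MVar
import           Data.Word                   (Word8)
import           Foreign.CUDA.Ptr            (HostPtr)
import qualified Data.IntMap.Strict          as IM
import qualified Foreign.CUDA.Driver.Marshal as Marshal

type Nursery = MVar (IM.IntMap [HostPtr Word8])

newNursery :: IO Nursery
newNursery = newMVar IM.empty

-- Take a cached pinned buffer of exactly this size, or fall back to the
-- (expensive) driver allocation on a miss.
malloc :: Nursery -> Int -> IO (HostPtr Word8)
malloc nrs bytes = modifyMVar nrs $ \tbl ->
  case IM.lookup bytes tbl of
    Just (p:ps) -> return (IM.insert bytes ps tbl, p)
    _           -> do p <- Marshal.mallocHostArray [] bytes
                      return (tbl, p)

-- Return a buffer to the nursery instead of freeing it immediately.
stash :: Nursery -> Int -> HostPtr Word8 -> IO ()
stash nrs bytes p = modifyMVar_ nrs (return . IM.insertWith (++) bytes [p])

-- Flush the nursery, actually releasing the pinned memory.
flush :: Nursery -> IO ()
flush nrs = modifyMVar_ nrs $ \tbl -> do
  mapM_ (mapM_ Marshal.freeHost) (IM.elems tbl)
  return IM.empty
```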