jaredhoberock opened this issue 7 years ago
We might also consider a `basic_vector<T, ExecutionPolicy, Allocator = some default>` type for embedding a default execution policy to use, instead of `seq`, for `vector` method overloads without an `ExecutionPolicy` parameter. The default used for `Allocator` would probably be whatever allocator is associated with `ExecutionPolicy`'s executor.
With this, we could introduce aliases such as `cuda::vector<T> = basic_vector<T, cuda::parallel_policy>`.
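A minimal sketch of what such a type could look like. The policy tags, the `assign` method, and the `par_vector` alias below are all hypothetical stand-ins (the real proposal would use types like `cuda::parallel_policy`, and would default `Allocator` from the policy's executor rather than to `std::allocator<T>`):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical policy tags standing in for execution policies such as
// std::execution::par or a proposed cuda::parallel_policy.
struct sequenced_policy {};
struct parallel_policy {};

// A vector embedding a default execution policy: method overloads that
// lack an ExecutionPolicy parameter use the embedded policy instead of seq.
template <class T,
          class ExecutionPolicy = sequenced_policy,
          class Allocator = std::allocator<T>>
class basic_vector {
public:
  basic_vector(std::size_t n, const T& value = T())
      : storage_(n, value) {}

  // No-policy overload: dispatches to the embedded default policy.
  void assign(std::size_t n, const T& value) {
    assign(ExecutionPolicy{}, n, value);
  }

  // Policy-taking overload; a real implementation would select a
  // sequential or parallel algorithm based on the policy type.
  template <class Policy>
  void assign(Policy, std::size_t n, const T& value) {
    storage_.assign(n, value);
  }

  const T& operator[](std::size_t i) const { return storage_[i]; }
  std::size_t size() const { return storage_.size(); }

private:
  std::vector<T, Allocator> storage_;
};

// The proposed kind of alias, with a hypothetical policy type in place
// of cuda::parallel_policy:
template <class T>
using par_vector = basic_vector<T, parallel_policy>;
```

With this, `par_vector<int> v(n);` behaves like `std::vector<int>` except that calls such as `v.assign(n, value)` run under the embedded policy by default.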
Something equivalent to `thrust::device_vector` could be done similarly, and would use `cuda::device_allocator` for its choice of allocator type.
It might be useful if `vector`'s execution policy constructor used the policy's underlying allocator as the `vector`'s allocator when it makes sense to do so. For example, a constructor call that receives only an execution policy could be made equivalent to one that also receives the allocator obtained from the policy's executor, in cases where `my_allocator` is constructible from `policy.executor().allocator<int>()`. In other cases, the `vector`'s allocator would just be default constructed as usual.

This sort of syntax would make it convenient to construct a
`vector` with affinity to a particular GPU fairly easily:
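A sketch of the constructor behavior described above, using hypothetical `toy_executor`/`toy_policy`/`policy_vector` types in place of the proposed CUDA ones (a real `cuda::device_allocator` would carry actual GPU affinity; here the `device` member merely illustrates the idea):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <type_traits>
#include <vector>

// Hypothetical stand-ins: an executor that can vend an allocator, and an
// execution policy that carries the executor.
struct toy_executor {
  int device;  // pretend GPU id, to illustrate affinity
  template <class T>
  std::allocator<T> allocator() const { return {}; }
};

struct toy_policy {
  toy_executor ex;
  toy_executor executor() const { return ex; }
};

// A vector whose execution policy constructor uses the policy's
// underlying allocator when the vector's allocator type is constructible
// from policy.executor().allocator<T>(), and default-constructs the
// allocator otherwise.
template <class T, class Allocator = std::allocator<T>>
class policy_vector {
public:
  template <class Policy>
  policy_vector(std::size_t n, const Policy& policy)
      : storage_(n, T(), make_allocator(policy)) {}

  std::size_t size() const { return storage_.size(); }

private:
  template <class Policy>
  static Allocator make_allocator(const Policy& policy) {
    if constexpr (std::is_constructible_v<
                      Allocator,
                      decltype(policy.executor().template allocator<T>())>) {
      // The allocator comes from the policy's executor.
      return Allocator(policy.executor().template allocator<T>());
    } else {
      // Otherwise, default construct as usual.
      return Allocator();
    }
  }

  std::vector<T, Allocator> storage_;
};
```

Under this sketch, `policy_vector<int> vec(100, toy_policy{toy_executor{0}});` would allocate with the allocator vended by the policy's executor; in the proposed CUDA setting, passing a policy bound to a particular device would give the `vector` storage with affinity to that GPU.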