sestephens73 opened this issue 3 years ago
Is this related to #47? I feel like this might be the same issue, depending on exactly how partitioned objects work.
https://github.com/ut-parla/Parla.py/blob/5454b97113b2c4b25a6574d9c47c56bc105e9c06/parla/ldevice.py#L358-L362
The current workaround is to pass `Ap.base[i]`, where `Ap = mapper.partition_tensor(...)` is the `PartitionedTensor` object.
As an ad hoc principle, `Ap[i]` is used for operations on the data itself (e.g. assignment, computation), while `Ap.base[i]` is used for operations that only need some attribute of the partition (e.g. device, size). A long-term solution has yet to be settled on.
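A minimal sketch of that convention, assuming the mapper setup from Parla's `LDeviceSequenceBlocked` examples (the array contents and partition count here are illustrative):

```python
import numpy as np
from parla.ldevice import LDeviceSequenceBlocked

A = np.arange(1024, dtype=np.float64)
mapper = LDeviceSequenceBlocked(4)   # partition across 4 logical devices
Ap = mapper.partition_tensor(A)      # Ap is the PartitionedTensor

# Data operations go through Ap[i]; these may move the partition to the
# current device context.
Ap[0] = Ap[0] + 1.0

# Attribute-only queries go through Ap.base[i], which does not move data.
nbytes = Ap.base[0].nbytes
```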
If you call `reserve_persistent_memory` on an object that is partitioned over devices for automatic data movement, the automatic mapper does not reserve memory for the object in its own device context. Instead, it copies the object to the current device context and reserves memory there. The desired behavior would be to leave the object on its own device and reserve memory in that context. We need better support for this. The current workaround is to call `reserve_persistent_memory` with an integer argument giving the size of the object, and to specify its device with the `device` argument.
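A hedged sketch of that workaround, following the `Ap.base[i]` convention above. The import path for `reserve_persistent_memory` is not shown in this issue, so it is assumed to be in scope; `gpu` is the device constructor from `parla.cuda`, and the per-device loop is illustrative:

```python
from parla.cuda import gpu
# Assumption: reserve_persistent_memory is imported from the appropriate
# Parla module (its import path is not given in this issue).

# Ap = mapper.partition_tensor(A) as above. Ap.base[i] gives the underlying
# partition without triggering a copy to the current device context.
for i in range(4):  # one partition per device, matching the mapper above
    part = Ap.base[i]
    # Reserve by size on the partition's own device, instead of passing the
    # partitioned object itself (which would copy it to the current context).
    reserve_persistent_memory(part.nbytes, device=gpu(i))
```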