ut-parla / Parla.py

A Python based programming system for heterogeneous computing

Reserving memory for an automatically partitioned object will not behave as desired #49

Open sestephens73 opened 3 years ago

sestephens73 commented 3 years ago

If you call `reserve_persistent_memory` on an object that is partitioned over devices for automatic data movement, the automatic mapper will copy the object to the current device context and then reserve memory in that context, rather than reserving memory in the object's own device context. The desired behavior is to leave the object on its own device and reserve memory in that context. We need better support for this. The current workaround is to call `reserve_persistent_memory` with an integer argument giving the object's size in bytes, and to specify its device with the `device` argument.
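The workaround above can be sketched as follows. Only `reserve_persistent_memory`, its integer size argument, and the `device` keyword come from the issue; the array setup is a hypothetical example, and the actual Parla call is shown in comments since it requires a Parla runtime:

```python
import numpy as np

# Hypothetical object that would otherwise be partitioned across devices.
a = np.ones((1024, 1024), dtype=np.float64)

# Size of the object in bytes, to pass as the integer argument
# instead of the object itself (which would trigger the mapper's copy).
nbytes = a.nbytes  # 1024 * 1024 * 8 = 8388608 bytes

# Sketch of the workaround (requires Parla; device name is illustrative):
# reserve_persistent_memory(nbytes, device=some_device)

print(nbytes)
```

Passing the size rather than the object avoids handing the partitioned object to the automatic mapper, so no copy to the current context occurs.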

arthurp commented 3 years ago

Is this related to #47? I feel like this might be the same issue, depending on exactly how partitioned objects work.

bozhiyou commented 3 years ago

https://github.com/ut-parla/Parla.py/blob/5454b97113b2c4b25a6574d9c47c56bc105e9c06/parla/ldevice.py#L358-L362 The current workaround is to pass `Ap.base[i]`, where `Ap = mapper.partition_tensor(...)` is the `PartitionedTensor` object.
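A minimal mock of the idea behind this workaround, assuming (as the linked `ldevice.py` code suggests) that indexing a `PartitionedTensor` goes through the automatic mapper while `.base` exposes the raw per-device buffers; everything here except the names `PartitionedTensor`, `base`, and `partition_tensor` is illustrative:

```python
import numpy as np

class PartitionedTensor:
    """Toy stand-in for Parla's PartitionedTensor (illustrative only)."""

    def __init__(self, parts):
        # `base` holds the raw per-device buffers; indexing it does NOT
        # go through the automatic mapper.
        self.base = parts

    def __getitem__(self, i):
        # In real Parla this is where automatic data movement would copy
        # the partition into the current device context; a copy simulates
        # that behavior here.
        return np.copy(self.base[i])

parts = [np.arange(4), np.arange(4, 8)]
Ap = PartitionedTensor(parts)

# Ap[0] yields a moved copy; Ap.base[0] is the original buffer in place.
moved = Ap[0]
raw = Ap.base[0]
print(moved is parts[0], raw is parts[0])  # False True
```

Passing `Ap.base[i]` therefore hands the reservation machinery the buffer that already lives on its device, sidestepping the unwanted copy described in this issue.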

As an ad hoc principle,

A long-term solution has not yet been settled.