Open hiyyg opened 2 years ago

Please provide tensor.data_ptr() like that in torch, so that we can access the cuda memory address of a tensor in python.
Can you give us an example to use this feature in PyTorch? I wonder in which situation you need to use tensor.data_ptr()
@MARD1NO For example: https://github.com/NVlabs/DeepIM-PyTorch/blob/b46ccd2465ce69ac575bf454d4171a5dcb9c6908/ycb_render/ycb_renderer.py#L506
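The linked line passes the raw CUDA address of a torch tensor to a copy routine, which is the typical situation where tensor.data_ptr() is needed. Below is a minimal, hedged sketch of that idea (not the DeepIM-PyTorch code): it runs on CPU with plain torch and ctypes, and for CUDA tensors the same integer returned by data_ptr() is a device address that interop APIs can consume the same way.

```python
# Minimal sketch (not the DeepIM-PyTorch code): tensor.data_ptr() is just an
# integer address of the tensor's first element, so external code that works on
# raw buffers can read or write the tensor's memory directly.
import ctypes
import torch

src = torch.arange(8, dtype=torch.float32)
dst = torch.zeros_like(src)

# Copy raw bytes between the two buffers using only their addresses and a size.
# For CUDA tensors, data_ptr() returns a device address instead, which can be
# handed to CUDA/OpenGL interop routines in the same spirit as the linked file.
ctypes.memmove(dst.data_ptr(), src.data_ptr(), src.numel() * src.element_size())
print(torch.equal(src, dst))  # True
```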
It seems like pytorch has no api called map_tensor. Was it implemented by your team?
@lixinqi map_tensor is not related to pytorch. It just needs a pointer like tensor.data_ptr() in pytorch. But if oneflow can also give users such a pointer, the code can be changed to use oneflow.
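To make that concrete, here is a hedged sketch of the adaptation being described: the downstream code only needs an integer data address, so swapping frameworks becomes a small change once oneflow exposes an equivalent of data_ptr(). The helper name raw_address is made up for illustration; oneflow had no such API at the time of this thread.

```python
# Hypothetical adapter (illustrative name only): code like map_tensor only needs
# a raw address, so any framework whose tensors expose one can be plugged in.
import torch

def raw_address(t):
    # torch tensors (and any tensor type with the same method) expose data_ptr().
    # An oneflow branch could be added here once an equivalent API exists.
    if hasattr(t, "data_ptr"):
        return t.data_ptr()
    raise NotImplementedError("this tensor type does not expose a raw data pointer")

x = torch.ones(4, dtype=torch.float32)
print(hex(raw_address(x)))  # host address here; a device address for CUDA tensors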
@hiyyg Oneflow doesn't provide the api tensor.data_ptr() right now. For one thing, given

t = oneflow.ones((32, 32), placement=flow.placement("cuda", {0: [0, 1]}), sbp=flow.sbp.split(0))

t is a distributed tensor (or consistent tensor): the data of t[0:16, :] is located on cuda device 0 and the data of t[16:, :] is located on cuda device 1, so the behavior of t.data_ptr() cannot be defined. We will discuss this feature request in the near future.
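A sketch of the situation described above, assuming two CUDA devices and a launch via oneflow's distributed launcher. The placement syntax follows the older oneflow version used in this thread; newer releases call these tensors "global" and spell the placement as flow.placement("cuda", ranks=[0, 1]).

```python
# Run with two ranks, e.g.:
#   python3 -m oneflow.distributed.launch --nproc_per_node 2 demo.py
import oneflow as flow

t = flow.ones((32, 32),
              placement=flow.placement("cuda", {0: [0, 1]}),
              sbp=flow.sbp.split(0))

# t's logical shape is (32, 32), but the data is physically sharded: rank 0 holds
# rows 0..15 on cuda:0 and rank 1 holds rows 16..31 on cuda:1, so no single device
# address describes "the" data of t. Each rank's local shard, however, is an
# ordinary tensor with a well-defined piece of memory.
local = t.to_local()
print(flow.env.get_rank(), tuple(local.shape))  # (16, 32) on each of the two ranks
```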
OK. Really hope that one day oneflow can TRULY replace pytorch.