When users get OrtValues and want to work with them in another framework, the current experience is: they call `.numpy()` and `ortvalue_from_numpy()`, plus some IO binding setup if the tensor is on CUDA. With `__dlpack__`, users can simply call `torch.from_dlpack(ort_value)` to get the CUDA tensor in PyTorch (and use the analogous methods in other frameworks). No IO binding setup is needed.
Related: https://github.com/microsoft/onnxruntime/issues/15963
cc @yuslepukhin