Hello, I'm making good use of the open-source project you provide. While using it, I found an issue and fixed it, and I'd like to contribute the fix to your project.

The issue I found
I want to use `set_shared_memory_region_from_dlpack` for GPU-to-GPU shared memory. I converted a PyTorch tensor to DLPack using `to_dlpack`, but the call raises an error in the second case below:
```python
# This works
img_tensor = img_tensor_raw.resize_(target_height, target_width, 3)
cudashm.set_shared_memory_region_from_dlpack(cuda_shm_ip_handle, [img_tensor])

# This raises an error
img_tensor = img_tensor_raw.resize_(1, target_height, target_width, 3)
cudashm.set_shared_memory_region_from_dlpack(cuda_shm_ip_handle, [img_tensor])
```
Although I make the PyTorch tensor contiguous, its strides are corrupted after converting it to DLPack. This is a known PyTorch behavior, and the maintainers say it is not a bug but deliberate logic to prevent another issue: `to_dlpack` sets the stride of any dimension whose size is 1 to just 1. So I fixed the `is_contiguous` logic to skip the contiguity check for dimensions of size 1, and it works in my code.
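As a minimal sketch of the fix (the function name and signature here are illustrative, not the exact code in the project): a row-major contiguity check can ignore the stride of any size-1 dimension, since such a dimension contributes no addressable offset and PyTorch's DLPack export reports stride 1 for it regardless of layout.

```python
def is_contiguous(shape, strides):
    """Return True if (shape, strides) describes a C-contiguous layout,
    ignoring strides of size-1 dimensions (which to_dlpack forces to 1)."""
    expected = 1
    # Walk dimensions from innermost to outermost, accumulating the
    # stride a contiguous tensor would have at each position.
    for extent, stride in zip(reversed(shape), reversed(strides)):
        # A dimension of extent 1 is never actually stepped over,
        # so its stride is irrelevant to contiguity; skip the check.
        if extent != 1 and stride != expected:
            return False
        expected *= extent
    return True
```

With this check, a contiguous tensor of shape `(1, H, W, 3)` whose DLPack strides come out as `(1, W*3, 3, 1)` instead of `(H*W*3, W*3, 3, 1)` is still accepted.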
If this project has a contribution process, I will follow it.

Thanks!