Closed lachlanbrewster closed 2 years ago
MEM.GPU was removed in Python because it couldn't be used as-is and was only there to match the C++ API. It was counterproductive, as many users (rightly) thought it could be used.
To make it usable we would need to implement something like CuPy buffers and CUDA context sharing, which requires significant development effort and is currently not on our roadmap. We'll add it back if we choose to implement it later.
If you need GPU buffers for high-performance applications, you should use the native C++ API directly. However, we're interested in hearing about your use case for this; if there is significant demand we could prioritize this feature. Since the wrapper is open source, pull requests to implement this are also an option.
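For reference, the "CuPy buffers" approach mentioned above could look roughly like the sketch below. This is illustrative only: `ptr` and `nbytes` are assumed to come from some future SDK API that exposes the raw CUDA device pointer, which pyzed does not currently provide.

```python
try:
    import cupy as cp
    HAVE_CUPY = True
except ImportError:  # CuPy (and a CUDA GPU) are required for the real thing
    cp = None
    HAVE_CUPY = False

def wrap_device_pointer(ptr, nbytes, shape, dtype="uint8"):
    """Wrap an externally owned CUDA allocation as a CuPy ndarray, zero-copy.

    `ptr`/`nbytes` are hypothetical values a future pyzed API might expose;
    the buffer stays owned by the SDK, so its lifetime must outlive the array.
    """
    if not HAVE_CUPY:
        raise RuntimeError("CuPy is required to wrap a device pointer")
    mem = cp.cuda.UnownedMemory(ptr, nbytes, None)   # no ownership transfer
    memptr = cp.cuda.MemoryPointer(mem, 0)
    return cp.ndarray(shape, dtype=dtype, memptr=memptr)
```

Context sharing is the harder part: the wrapper and the consumer (CuPy, torch) must agree on the CUDA context the pointer was allocated in, which is presumably why this needs real development effort rather than a thin binding.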
Okay that makes sense, thank you for the clarification
I'd personally be interested in that. Most, if not all, of my processing is done in a Python environment, so it would be more convenient to get the speed of GPU memory with the ease of Python and its already abundant scientific tools.
+1, this would be a super helpful feature for researchers that are doing computer vision work with Zed cameras.
We have a project currently using ZED cameras to do real-time object pose estimation with CNNs, and reading GPU data directly would alleviate our main bottleneck. We currently have to read the image into CPU memory and then move it back to the GPU as a torch.Tensor, which we think is causing a relatively large performance hit.
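The CPU round trip described above can be sketched as follows. Hedged: the pyzed calls are shown only in comments, the frame is simulated with NumPy so the snippet runs without a camera, and torch is optional.

```python
import numpy as np

# In a real pipeline the frame would come from the ZED SDK, e.g.:
#   import pyzed.sl as sl
#   zed = sl.Camera(); ...; zed.retrieve_image(mat, sl.VIEW.LEFT, sl.MEM.CPU)
#   frame = mat.get_data()
# Here we simulate a 720p BGRA frame so the snippet runs standalone.
frame = np.zeros((720, 1280, 4), dtype=np.uint8)

try:
    import torch
    # The device->host->device round trip the thread complains about:
    # the image already lived on the GPU inside the SDK, was copied to
    # host memory, and is now copied back to the GPU for the CNN.
    tensor = torch.from_numpy(frame[..., :3].copy())  # drop alpha channel
    if torch.cuda.is_available():
        tensor = tensor.to("cuda")  # the extra copy that MEM.GPU would avoid
except ImportError:
    pass  # torch not installed; the numpy stage above still illustrates the copy
```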
Please update readme.md to clearly state that the Python ZED API doesn't support GPU memory.
Preliminary Checks
Description
I have noticed that I can no longer use GPU memory on v3.6.x; I can only choose CPU memory. I have also seen that it has been scrubbed from your API documentation. What is the reason for this, and will it return as an option in later releases?
The only mention in the patch notes is "Fixed random GPU memory corruption leading to non repeatable output." Was this by chance fixed by removing the option to use GPU memory?
Thanks
Steps to Reproduce
1. `import pyzed.sl as sl`
2. `mem_type = sl.MEM.GPU`
...
Expected Result
`mem_type` to be set to `sl.MEM.GPU`.
Actual Result
An AttributeError occurs: GPU is not a valid member of `sl.MEM`.
ZED Camera model
ZED2
Environment
Anything else?
No response