eyast opened 1 year ago
@eyast there are CUDA functions in jetson-utils for applying image de-warping given the intrinsic camera calibration parameters; however, these don't have a Python API yet (C++ only), so yes, you could have OpenCV perform the de-warping first.
Hi, I'm wondering if anybody can help. I am trying to build an application that maps the 3D location of a human being by first extracting keypoints using PoseNet. I built a custom neural network that ingests the features provided by PoseNet. The problem is that the accuracy of my application is very low, and I suspect this is because I haven't calibrated my camera. As I read more about this, it seems calibration is performed using OpenCV. I was wondering if anyone had ideas on how to incorporate a calibration matrix into my solution.
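For context, the "calibration matrix" OpenCV produces is a 3x3 intrinsic matrix K (focal lengths fx, fy and principal point cx, cy) plus a set of lens-distortion coefficients. Below is a minimal, illustrative numpy sketch of the radial/tangential (Brown-Conrady) model those parameters describe — the parameter names follow OpenCV's convention, but all the numeric values are made up, not from any real calibration:

```python
import numpy as np

def project_with_distortion(pts_cam, K, dist):
    """Project 3D points (in camera coordinates) to pixels, applying
    radial (k1, k2) and tangential (p1, p2) distortion before K."""
    k1, k2, p1, p2 = dist
    x = pts_cam[:, 0] / pts_cam[:, 2]          # normalized coordinates
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    u = K[0, 0] * xd + K[0, 2]                 # u = fx * xd + cx
    v = K[1, 1] * yd + K[1, 2]                 # v = fy * yd + cy
    return np.stack([u, v], axis=1)

# Example intrinsics (invented values for illustration only):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = (-0.2, 0.05, 0.001, -0.001)

# A point on the optical axis lands exactly on the principal point:
center = project_with_distortion(np.array([[0.0, 0.0, 1.0]]), K, dist)
```

Calibrating with `cv2.calibrateCamera()` (e.g. from chessboard images) gives you K and `dist`; un-distorting a frame with `cv2.undistort()` makes the camera behave like the ideal pinhole model above with all distortion coefficients zero.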
PS: my knowledge of CUDA is limited, and I cannot access SDK/IntelliSense information in my IDE, so I am not sure what I can and cannot pass to `net.Process()` (even after reading the SDK page, which states that a CUDA memory capsule is expected); I can't find information on how to construct one. Should I calibrate my camera first in OpenCV, then convert what videoSource is capturing to a numpy array, apply the un-distortion, and bring it back to a CUDA memory capsule for processing by PoseNet? If that's the right high-level methodology, any ideas on the approach, gotchas, or things to keep in mind to accelerate inference?
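One note on the performance question: jetson-utils does provide `cudaToNumpy()`/`cudaFromNumpy()` for moving frames between CUDA memory and numpy, but un-distorting every full frame on the CPU is expensive. Since the downstream network only consumes PoseNet keypoints, a cheaper alternative (a suggestion, not something from this thread) is to run PoseNet on the raw frames and un-distort only the detected keypoint pixels. The fixed-point iteration below inverts the radial/tangential model; parameter names follow OpenCV's convention (`cv2.undistortPoints` does the same job), but this helper is an illustrative sketch, not a jetson-inference API:

```python
import numpy as np

def distort_normalized(x, y, dist):
    """Forward Brown-Conrady distortion on normalized coordinates."""
    k1, k2, p1, p2 = dist
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def undistort_keypoint(u, v, K, dist, iters=20):
    """Map a distorted pixel (u, v) back to its ideal pinhole pixel
    by fixed-point iteration on the normalized coordinates."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    xd = (u - cx) / fx                    # observed (distorted) coords
    yd = (v - cy) / fy
    x, y = xd, yd                         # initial guess
    for _ in range(iters):
        xe, ye = distort_normalized(x, y, dist)
        x, y = x + (xd - xe), y + (yd - ye)   # correct by the residual
    return fx * x + cx, fy * y + cy
```

The undistorted keypoints (or the normalized coordinates `(x, y)` themselves, which are already in K-independent units) are then what you would feed to the custom network — that way the network's inputs no longer depend on this particular lens.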