Closed manjuj123 closed 5 years ago
If you know the camera calibration, then you can find those coordinates. Otherwise, if you know the size of the person's skeleton, that would work as well. However, an additional step would be required: computing the translation of the pose.
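For what it's worth, the second option (known skeleton size) reduces to similar triangles under a pinhole camera model. This is only a minimal sketch, not code from this repo; the focal length in pixels and the choice of reference limb are assumptions:

```python
import numpy as np

def estimate_depth(keypoints_px, keypoints_m, focal_px):
    """Rough camera-frame depth Z (in metres) from the ratio of a known
    real-world segment length to its projected length in pixels.
    Assumes a pinhole camera and a segment roughly parallel to the
    image plane (foreshortening biases the estimate otherwise)."""
    # Projected length of the reference segment (e.g. neck -> pelvis), in pixels.
    px_len = np.linalg.norm(keypoints_px[0] - keypoints_px[1])
    # Known real-world length of the same segment, in metres.
    real_len = np.linalg.norm(keypoints_m[0] - keypoints_m[1])
    # Similar triangles: px_len / focal_px = real_len / Z
    return focal_px * real_len / px_len

# Example: a 0.5 m torso spanning 100 px with f = 1000 px gives Z = 5 m.
kps_px = np.array([[320.0, 100.0], [320.0, 200.0]])
kps_m = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
print(estimate_depth(kps_px, kps_m, 1000.0))  # -> 5.0
```

The reference segment should be one whose real length you trust and which is not strongly foreshortened in the image; the torso is a common choice.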
Great work! Is it possible to compute the orientation of a person relative to the ground plane if the ground plane vector is known relative to the camera coordinate?
Yes, in that case it's possible.
Thanks! I've been trying to understand what the required calculation is; I'd be grateful for any assistance!
Problem: given a 'lifting from the deep' 3D pose, calculate the orientation of the pose's spine relative to the real-world ground plane (i.e. are they lying parallel to the floor or standing?).
I've attached a diagram of my understanding: the poses from 'lifting from the deep' are camera-centric by default. In the attached diagram: X,Y,Z are real-world coordinates, x,y,z are the camera coordinate frame with z being the principal axis.
This means that if the person is standing in the center of the image, they would be in line with the principal axis. If we assume we know the angle (alpha) of the camera relative to the wall (or we could calculate it from the real-world distances X, Y), then the answer to the problem seems simple: by trigonometry it is the same angle, alpha.
However, if the person is not standing in the center of the original uncropped image, I am unsure how to obtain beta. I think beta would be required to calculate the corresponding orientation of the ground plane relative to the principal axis (and thus to solve the problem).
Thanks for any assistance!
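If the camera intrinsics are known, the angle beta between the principal axis and the ray through any pixel follows directly from the pinhole projection model. A small sketch (the intrinsics `fx, fy, cx, cy` are assumed known, e.g. from calibration):

```python
import math

def ray_angle_to_principal_axis(u, v, fx, fy, cx, cy):
    """Angle (degrees) between the principal axis and the ray through
    pixel (u, v), from the pinhole model: x = (u - cx) / fx, etc."""
    x = (u - cx) / fx  # normalised image coordinates
    y = (v - cy) / fy
    return math.degrees(math.atan(math.hypot(x, y)))

# A pixel at the principal point lies on the axis (beta = 0); a pixel
# one focal length to the side gives beta = 45 degrees.
print(ray_angle_to_principal_axis(640, 360, 1000, 1000, 640, 360))   # -> 0.0
print(ray_angle_to_principal_axis(1640, 360, 1000, 1000, 640, 360))  # -> 45.0
```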
I suppose my confusion comes from the fact that when we crop the original image (e.g. from Person A in the center) down to someone at the edge of the image (Person B), the resulting 'lifting from the deep' pose is in a different coordinate system from the original image, since the principal axis now points through the cropped person (Person B). So the question becomes how to get both poses into the same coordinate system (and therefore be able to compare their orientations against a fixed reference vector representing the 'floor').
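One way to sketch that re-alignment: rotate the crop-centric pose so that its z axis coincides with the ray through the crop center in the original camera frame, then measure the spine against the known ground-plane normal. This is an illustration under assumed intrinsics and joint indices, not the repo's own method:

```python
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation matrix taking unit vector a onto unit vector b
    (Rodrigues' rotation formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):
        return np.eye(3)  # already aligned
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def spine_angle_to_ground(pose_crop, crop_center_uv, fx, fy, cx, cy,
                          ground_normal_cam, pelvis=0, neck=1):
    """Angle (degrees) between the spine and the ground plane.
    pose_crop: (J, 3) crop-centric 3D joints; ground_normal_cam is the
    ground-plane normal expressed in the original camera frame."""
    u, v = crop_center_uv
    # Ray through the crop centre in the original camera frame.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # The crop-centric frame has its z axis along this ray; undo that.
    R = rotation_aligning(np.array([0.0, 0.0, 1.0]), ray)
    pose_cam = pose_crop @ R.T
    spine = pose_cam[neck] - pose_cam[pelvis]
    n = ground_normal_cam / np.linalg.norm(ground_normal_cam)
    # Angle to the plane = 90 deg minus angle to the plane's normal.
    cosang = abs(np.dot(spine, n)) / np.linalg.norm(spine)
    return 90.0 - np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Standing person (spine parallel to the ground normal) -> 90 degrees;
# lying person (spine in the plane) -> 0 degrees.
standing = np.array([[0.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
print(spine_angle_to_ground(standing, (640, 360), 1000, 1000, 640, 360,
                            np.array([0.0, -1.0, 0.0])))  # -> 90.0
```

The pelvis/neck indices and the convention that the crop frame's z axis passes through the crop center are assumptions to check against the actual output of the model.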
Hello,
I used the 3D pose estimator code to identify the joints and am trying to convert them into real-world coordinates. To do this, I need to know the depth value.
Any thoughts on how I can get the depth value? Thanks.