Closed: KungZell closed this issue 2 years ago
Hi,
The main modification would be to main.py: https://github.com/John-Dean/DepthAI-V2V-PoseNet/blob/main/main.py
Adjust CAMERA_ROTATION to 0 (as you won't be putting the camera on its side), and then modify lines 19-40 and 51-55 to use the Kinect API.
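For reference, here is a rough sketch of what the capture side could look like using the pyk4a bindings for the Azure Kinect. This is untested against the repo, and the config values (resolution, depth mode) are just assumptions you would tune to match what main.py expects:

```python
# Hypothetical sketch using the pyk4a Azure Kinect bindings (not tested here).
import pyk4a
from pyk4a import Config, PyK4A

# Start the Azure Kinect; resolution and depth mode are placeholder choices.
k4a = PyK4A(
    Config(
        color_resolution=pyk4a.ColorResolution.RES_720P,
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        synchronized_images_only=True,
    )
)
k4a.start()

while True:
    capture = k4a.get_capture()
    if capture.color is None or capture.transformed_depth is None:
        continue

    # BGRA colour image and depth (in mm) registered to the colour camera,
    # i.e. the two frames main.py expects inside the while loop.
    color_frame = capture.color[:, :, :3]    # drop the alpha channel
    depth_frame = capture.transformed_depth  # uint16, millimetres
    # ... hand color_frame / depth_frame to the rest of the pipeline here
```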
The end goal is to have two frames, one RGB and one depth, stored in "color_frame" and "depth_frame" inside the while loop (lines 54/55), and to have the intrinsics and extrinsics (plus the width and height of the frames) populated on line 38. These are used by Open3D here: https://github.com/John-Dean/DepthAI-V2V-PoseNet/blob/main/fullbody_tracking/depth_to_pointcloud.py to convert the depth image into a point cloud. For a Kinect camera they should be fairly easy to find, if there isn't an API call to grab them from the device.
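For the intrinsics, something along these lines may work, continuing from the sketch above (reusing k4a and depth_frame): pyk4a exposes the factory calibration, and the camera matrix can be unpacked into the Open3D PinholeCameraIntrinsic that depth_to_pointcloud.py works with. Treat the method names and scale values as assumptions to verify against your SDK version:

```python
# Hypothetical sketch: pull intrinsics from the Kinect calibration and build a
# point cloud with Open3D, mirroring what depth_to_pointcloud.py does.
import numpy as np
import open3d as o3d
import pyk4a

# Camera matrix for the colour camera (the depth frame is registered to it above).
camera_matrix = k4a.calibration.get_camera_matrix(pyk4a.CalibrationType.COLOR)
fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
height, width = depth_frame.shape

intrinsics = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)
extrinsics = np.eye(4)  # identity: point cloud stays in the camera's own frame

# Kinect depth is in millimetres, hence depth_scale=1000.0.
point_cloud = o3d.geometry.PointCloud.create_from_depth_image(
    o3d.geometry.Image(depth_frame),
    intrinsics,
    extrinsics,
    depth_scale=1000.0,
    depth_trunc=4.0,  # ignore points beyond ~4 m
)
```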
Hopefully that helps?
Regarding the image, that is the exact output from the trained model. The model itself is lifted from here: https://github.com/John-Dean/V2V-PoseNet-PyTorch and was trained using the ITOP dataset (instructions for replicating that can be found in the readme of that repo). The ITOP dataset is captured using a Kinect camera. I think that image is taken from the author's original V2VPoseNet repo, which I found basically unusable, so I rewrote it to make it easier to work with (which is what that repo is).
Regards, John
Also worth noting, I have a vague plan to get this working on iOS devices with rear LiDAR sensors, as they also have depth capability, but I don't own a Mac to set up a test of this and I'm a bit busy at the moment, so it's not an embarrassing question at all. The repo is designed to be portable to different types of depth cameras.
Thank you very much for the quick reply despite your busy schedule. I will debug and deploy following your suggestions. Thank you again for your generous help.
Best Regards.
Thank you for the great project. Here's an embarrassing question: I don't have a DepthAI camera, but I do have an Azure Kinect, so I want to change the depth image interface to the Kinect. Can you give me some suggestions on how to go about it?
In addition, I would like to ask whether the project can achieve the effect shown in the image below. Thank you again for the project; I look forward to your reply.