eshafeeqe closed this issue 5 years ago
Hi, all you have to do is make another src/data/$DBNAME/data.lua following the other src/data/$DBNAME/data.lua files. src/data/NYU/data.lua may give you the minimal example.
Hi @eshafeeqe. Did you manage to write test code for your own depth maps? I want to try the same thing, but I have no idea how to read or use these Lua files. Could you please share your code if you did?
Hello @eshafeeqe @otepencelik May I ask if you have any code for depth camera testing using Kinect? Can you make it public? Thank you very much.
@KungZell and @otepencelik, it's been a while since I asked this. Unfortunately, I was unable to get skeleton inference from depth images as the author described. Instead, I used OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose), which gives a 2D skeleton from image files.
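For anyone taking the same route: OpenPose's `--write_json` flag writes one JSON file per image, where each detected person's `pose_keypoints_2d` field is a flat list of (x, y, confidence) triples, with undetected joints reported as zeros. A minimal parsing sketch (the helper name and the synthetic JSON string are just for illustration):

```python
import json

def parse_openpose_keypoints(json_text):
    """Parse OpenPose --write_json output into per-person joint lists.

    Each person's "pose_keypoints_2d" is a flat list of (x, y, confidence)
    triples; undetected joints come back as (0.0, 0.0, 0.0).
    """
    data = json.loads(json_text)
    skeletons = []
    for person in data.get("people", []):
        flat = person["pose_keypoints_2d"]
        joints = [(flat[i], flat[i + 1], flat[i + 2])
                  for i in range(0, len(flat), 3)]
        skeletons.append(joints)
    return skeletons

# Synthetic two-joint example, not real OpenPose output.
example = '{"people": [{"pose_keypoints_2d": [10.0, 20.0, 0.9, 0.0, 0.0, 0.0]}]}'
print(parse_openpose_keypoints(example))
# -> [[(10.0, 20.0, 0.9), (0.0, 0.0, 0.0)]]
```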
Hello @eshafeeqe, thank you very much for your reply. In fact, I'm still trying, but I'm not quite sure how to get the camera parameters and depth data of the Kinect DK with Python. Can you give some advice on this? Thank you very much.
Hello @KungZell,
I am not working on this problem anymore. I had been working with Kinect V2 at that time and used some camera calibration methods to estimate camera parameters. I hope you can find such methods for your Kinect.
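As a rough illustration of what those estimated camera parameters are used for: given pinhole intrinsics (fx, fy, cx, cy) from calibration, a depth pixel can be back-projected into a 3-D point in camera coordinates. This is just the standard pinhole camera model, not code from this repository, and the intrinsic values below are placeholders in roughly the range of a Kinect depth camera:

```python
def depth_pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth depth_mm into camera space.

    Standard pinhole model: z is the depth along the optical axis,
    x and y follow from the pixel offset to the principal point.
    """
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Placeholder intrinsics; substitute the values your calibration yields.
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0
print(depth_pixel_to_3d(256.0, 212.0, 1000.0, fx, fy, cx, cy))
# -> (0.0, 0.0, 1000.0)
```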
Okay, thank you very much for your help.
Hello,
First of all, thanks for making this awesome project public.
I am wondering whether I can use this approach to get human pose joints from a depth image captured by my Kinect sensor. If yes, can you give me a minimal example of how to do it? Since I am new to Torch7 and Lua programming, it's a little hard for me to understand the code.
Any help will be much appreciated.
Thanks and Regards, Shafeeq E