ravijo / ros_openpose

ROS wrapper for OpenPose
MIT License

wrapper for Kinect Azure camera #19

Closed AswinkarthikeyenAK closed 4 years ago

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , I am trying to use ros_openpose with the Kinect Azure camera. The Kinect Azure ROS driver can be found here.

After installing all the dependencies for ros_openpose, I tried to write a wrapper for the Kinect Azure device based on the instructions provided. I have provided the launch file; I set the args and subscribed to the relevant topics. When launching the file, I get an error:

Value error: is not a 'bool' type The traceback for the exception was written to the log file

This happens after I changed the default topic that subscribes to the camera calibration parameters.

Can you kindly provide some assistance to fix this error? Feel free to let me know if you need more information.

Thanks config_kinectazure (copy).txt

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

The error you reported, i.e., Value error: is not a 'bool' type, is possibly coming from the launch file due to incorrect syntax. I suggest you look at the following working examples of launch files-

You may also want to run a debugger on your launch file; please check here.

Feel free to comment. I would love to see ros_openpose working for the Kinect Azure camera.

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , I made some progress here. I am able to launch the run.launch file, which runs the Kinect driver and the config file that was created, and I can visualize the point cloud data in RViz. But I am unable to do skeleton tracking.

I am not sure about this particular argument in the config file: <arg name="frame_id" default="camera_body" />. The camera_body is a joint from the camera's URDF, but I am not sure if this is correct. Can you tell me what needs to go in here as the frame_id?

Thanks

ravijo commented 4 years ago

Hi,

It seems that you can see the following-

Glad to hear the progress.

The frame_id is used to assign an id to the skeleton frame; it refers to the origin point of the skeleton frame. This id is used across ROS, such as for RViz visualization. You want to make sure that the correct value is assigned to it, because an incorrect value will throw the skeleton to a random 3D origin inside RViz.

In short, we want to overlay the skeleton on the point cloud data. We are using depth frames to get the depth info, therefore you need to make sure that you assign the frame_id of the depth frame to this parameter. At the same time, remember to provide the correct cam_info_topic, too.
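
In case it helps, here is a minimal sketch (not part of ros_openpose; the topic name below is only an assumption, use whatever camera_info topic your Azure Kinect driver actually publishes) that prints the frame_id carried by the depth camera_info message, which is the value to put into the frame_id arg:

    # sketch: print the frame_id reported by the depth camera_info topic
    import rospy
    from sensor_msgs.msg import CameraInfo

    rospy.init_node('print_depth_frame_id')
    # '/depth_to_rgb/camera_info' is an assumed topic name; replace it with your driver's topic
    info = rospy.wait_for_message('/depth_to_rgb/camera_info', CameraInfo)
    print('frame_id: %s' % info.header.frame_id)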

AswinkarthikeyenAK commented 4 years ago

Hi, No, I am not able to do human skeleton detection in the OpenPose window either. I was able to open the OpenPose window, but there was no skeleton tracking. I can visualize the point cloud data (without the skeleton) in RViz.

Also, thanks for the information about frame_id. You mentioned in your comment that a correct value needs to be assigned, but my question was: what needs to be specified as the frame_id? Is it something that comes from the URDF, or where can I find the exact value of the frame_id?

For instance, I looked in the Kinect v2 ROS files and could not find "kinect2_ir_optical_frame" there. How did you determine this frame_id?

Please let me know if you need more info. Thanks

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

Can you please use the following files in your project?

  1. config_azurekinect.launch: Don't forget to rename it. Use the following command to execute it-

    roslaunch ros_openpose run.launch camera:=azurekinect

    It should print the 3D coordinates in the terminal. Meanwhile, you should be able to see the detection inside the OpenPose window as well.

  2. person_pointcloud_azurekinect.rviz.txt: Don't forget to rename it. Load this file manually in RViz; it will show you the point cloud from the camera.

Let me know your progress. It has been tested on my machine.

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo, Thanks for the launch files. I tried to run the launch file after making the necessary changes. When I launch it, the OpenPose console window pops up and then there is no output.

In the terminal, I noticed the following error:

[ WARN] [1597701258.012252524]: Waiting for datum... Error Prototxt file not found: /home/open_pose/openpose/models/pose/body_25/pose_deploy.prototxt. Possible causes:

  1. Not downloading the OpenPose trained models.
  2. Not running OpenPose from the same directory where the model folder is located.
  3. Using paths with spaces.

However, in the path mentioned above, there is a file named pose_deploy.prototxt.

Please find the modified line in the run.launch file below: <arg name="openpose_args" value="-net_resolution 128x96 --camera 0 --model_folder /home/open_pose/openpose/models/ --model_pose" />

OpenPose works fine when running it directly with the following flags: -camera_resolution 640x480 -net_resolution 128x96 --camera 0

Kindly let me know if I am missing something here. Thank you for your time.

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , I fixed the previous error [Error Prototxt file not found], but I am still unable to solve the issue with the OpenPose console. The console pops up, but I cannot see any output; the window just turns grey.

In the terminal window, I noticed the following:

[ WARN] [1597752455.110829289]: Waiting for datum...
[ WARN] [1597752462.086175956]: Waiting for color image frame...
[INFO] [1597752462.182616]: [x: 1382.0637207
y: 520.540100098, x: 1218.11791992
y: 682.417724609, x: 1072.24780273
y: 633.846923828, x: 1007.66003418
y: 942.556762695, x: 1382.01049805
y: 976.009033203, x: 1348.83227539
y: 698.646728516, x: 1413.88232422
y: 942.575256348, x: 1511.85571289
y: 958.788269043, x: 1008.00073242
y: 1218.91125488, x: 1218.77209473
y: 1316.27380371, x: 0.0
y: 0.0, x: 1235.48376465
y: 1202.98779297, x: 1462.73388672
y: 1511.42089844, x: 0.0
y: 0.0, x: 1365.33227539
y: 456.047821045, x: 0.0
y: 0.0, x: 1235.99304199
y: 455.566131592, x: 0.0
y: 0.0]

[ERROR] [1597752462.447914]: bad callback: <bound method RealtimeVisualization.frame_callback of <__main__.RealtimeVisualization instance at 0x7f82426c80>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/ak/catkin_ws/src/ros_openpose/scripts/visualizer.py", line 144, in frame_callback
    body_marker[index].points = [person.bodyParts[idx].point for idx in body_part if self.isValid(person.bodyParts[idx])]
IndexError: list index out of range

[ERROR] [1597752462.979111]: bad callback: <bound method RealtimeVisualization.frame_callback of <__main__.RealtimeVisualization instance at 0x7f82426c80>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/ak/catkin_ws/src/ros_openpose/scripts/visualizer.py", line 144, in frame_callback
    body_marker[index].points = [person.bodyParts[idx].point for idx in body_part if self.isValid(person.bodyParts[idx])]
IndexError: list index out of range

[INFO] [1597752462.980736]: [x: 1382.06408691
y: 520.425720215, x: 1218.23754883
y: 682.420654297, x: 1072.30224609
y: 633.846313477, x: 992.027954102
y: 942.834228516, x: 1397.73803711
y: 976.227539062, x: 1349.04248047
y: 698.64239502, x: 1413.93566895
y: 942.830993652, x: 1527.73303223
y: 958.943908691, x: 1039.98583984
y: 1219.05419922, x: 0.0
y: 0.0, x: 0.0
y: 0.0, x: 1251.15441895
y: 1218.85888672, x: 1446.37158203
y: 1511.60253906, x: 0.0
y: 0.0, x: 1365.3392334
y: 455.972015381, x: 0.0
y: 0.0, x: 1235.95178223
y: 455.499237061, x: 0.0
y: 0.0]

[INFO] [1597752463.511045]: [x: 1382.05517578
y: 536.416442871, x: 1186.69311523
y: 682.386047363, x: 1040.20556641
y: 617.983581543, x: 1023.88751221
y: 958.379638672, x: 1365.91015625
y: 991.463256836, x: 1332.32958984
y: 699.08001709, x: 1430.0045166
y: 958.937255859, x: 1495.00256348
y: 861.627563477, x: 1056.74279785
y: 1235.52197266, x: 0.0
y: 0.0, x: 0.0
y: 0.0, x: 1236.01306152
y: 1235.07800293, x: 0.0
y: 0.0, x: 0.0
y: 0.0, x: 1365.38989258
y: 456.232727051, x: 0.0
y: 0.0, x: 1235.88085938
y: 455.448364258, x: 0.0
y: 0.0]

[ERROR] [1597752463.513085]: bad callback: <bound method RealtimeVisualization.frame_callback of <__main__.RealtimeVisualization instance at 0x7f82426c80>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/lams/catkin_ws/src/ros_openpose/scripts/visualizer.py", line 144, in frame_callback
    body_marker[index].points = [person.bodyParts[idx].point for idx in body_part if self.isValid(person.bodyParts[idx])]
IndexError: list index out of range

[ WARN] [1597752465.008793953]: Waiting for datum...
[ WARN] [1597752472.094717145]: Waiting for color image frame...
[ WARN] [1597752475.011401042]: Waiting for datum...
[ WARN] [1597752481.838410622]: Waiting for color image frame...
[ WARN] [1597752485.016863770]: Waiting for datum...
[ WARN] [1597752491.838509336]: Waiting for color image frame...
[ WARN] [1597752495.018291123]: Waiting for datum...
[ WARN] [1597752501.848509184]: Waiting for color image frame...
[ WARN] [1597752505.023092456]: Waiting for datum...
[ WARN] [1597752511.852989200]: Waiting for color image frame...
[ WARN] [1597752515.034455380]: Waiting for datum...

The values are printed three times along with the errors, and then the warnings continue. Please find below the OpenPose console window that opens up after roslaunch.

ros_openpose_console

Thanks for your time.

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , I have fixed the issue with OpenPose and the other issues I had with the Kinect Azure device.

But currently I am getting the following error:

[ERROR] [1598044032.342585]: bad callback: <bound method RealtimeVisualization.frame_callback of <__main__.RealtimeVisualization instance at 0x7f89f5d550>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/ak/catkin_ws/src/ros_openpose/scripts/visualizer.py", line 144, in frame_callback
    body_marker[index].points = [person.bodyParts[idx].point for idx in body_part if self.isValid(person.bodyParts[idx])]
IndexError: list index out of range

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

Are you using the latest version? You should not face such an issue in the newest version.

Please note that the config_azurekinect.launch and person_pointcloud_azurekinect.rviz files provided above work without showing any errors on my system.

However, please note that at the moment, 3D conversion from image pixels is not working correctly for the Azure Kinect camera. This is due to the different distortion model used by the Azure Kinect camera: so far, ros_openpose assumes the plumb_bob model, but the Azure Kinect reports the rational_polynomial distortion model. This is why the 2D pixel to 3D coordinate calculation is returning wrong points.

Please wait for a while; I am looking for a way to incorporate the rational_polynomial distortion model as well.

Nevertheless, you should be able to run the Azure Kinect camera with ros_openpose in image (pixel) space.
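
To illustrate why the model mismatch matters, here is a small self-contained sketch (the intrinsics and distortion coefficients below are made-up placeholders, purely for illustration; the real values come from the K and D fields of the camera_info topic). It shows that undistorting the same pixel with the full 8-coefficient rational_polynomial vector and with only its first 5 coefficients (i.e., treating it as plumb_bob) gives different normalized coordinates:

    # sketch only: numbers are placeholders, take K and D from the camera_info topic
    import numpy as np
    import cv2

    K = np.array([[975.0,   0.0, 1020.0],
                  [  0.0, 975.0,  775.0],
                  [  0.0,   0.0,    1.0]])
    D_rational = np.array([0.45, -2.6, 0.0, 0.0, 1.5, 0.33, -2.4, 1.4])  # 8 coeffs (rational_polynomial)
    D_plumb = D_rational[:5]                                             # first 5 coeffs (plumb_bob)

    pixel = np.array([[[1382.0, 520.5]]], dtype=np.float64)
    print(cv2.undistortPoints(pixel, K, D_rational))  # uses the full rational polynomial model
    print(cv2.undistortPoints(pixel, K, D_plumb))     # differs, because k4..k6 are ignored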

AswinkarthikeyenAK commented 4 years ago

Hi, I am using OpenPose version 1.5.1. Do you want me to change to the latest version? Also, can you visualize the skeleton in RViz? Do you mind telling me what errors you are facing with the Kinect Azure sensor? I have been working on this for the last week to figure out how to fix a few errors; I can help you with that if required.

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

The following error is thrown by visualizer.py, which uses the converted 3D values to compose a skeleton of the human and render it in RViz.

[ERROR] [1598044032.342585]: bad callback: <bound method RealtimeVisualization.frame_callback of <__main__.RealtimeVisualization instance at 0x7f89f5d550>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/ak/catkin_ws/src/ros_openpose/scripts/visualizer.py", line 144, in frame_callback
    body_marker[index].points = [person.bodyParts[idx].point for idx in body_part if self.isValid(person.bodyParts[idx])]
IndexError: list index out of range
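
The IndexError itself means that one of the indices in body_part is larger than the number of entries in person.bodyParts, e.g. when the body-part index table and the pose model that OpenPose was run with do not match. Below is a minimal self-contained sketch of the failure and a guarded variant; the names are stand-ins for the real messages, not actual ros_openpose code:

    # sketch only: BodyPart and the index table are stand-ins for the real messages
    from collections import namedtuple

    BodyPart = namedtuple('BodyPart', 'point score')
    body_parts = [BodyPart(point=(0.1, 0.2, 1.0), score=0.9)] * 18  # e.g. a COCO-sized detection
    body_part_indices = [0, 1, 24]                                  # a BODY_25-style index: 24 is out of range here

    # points = [body_parts[idx].point for idx in body_part_indices]  # raises IndexError, as in the traceback

    # guarded version skips indices that the detection does not contain
    points = [body_parts[idx].point for idx in body_part_indices if idx < len(body_parts)]
    print(points)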

As I said above, right now the 2D to 3D conversion for the Azure Kinect camera is broken. Therefore, we can disable 3D rendering inside RViz by unsetting the skeleton flag. Please use the following command-

roslaunch ros_openpose run.launch camera:=azurekinect skeleton:=false

I am using OpenPose version 1.5.1. Do you want me to change to the latest version?

The error you reported doesn't seem to be related to OpenPose. However, there is no harm in checking with the latest version.

Do you mind telling me what errors you are facing with the Kinect Azure sensor?

There is no error; it is working fine on my system. Please read my complete response here.

Cheers!

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo, I am changing to OpenPose v1.0.6 now.

When I use the OpenPose BODY_25 model, I don't get the error from visualizer.py, but I noticed I got the error when I used COCO. I am not sure what causes this; I am letting you know so that you can test it on your machine, just in case. I will test launching ros_openpose with the suggestion you gave. Thanks!!!

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , Did you by any chance get an error like this?

../src/capturesync/capturesync.c (142): replace_sample(). capturesync_drop, releasing capture early due to full queue TS: 442084411 type:Depth

or capturesync_drop, releasing capture early due to full queue TS: 451066 type:Color

Thanks

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

Please pull the latest version of ros_openpose. I have just added support for the Azure Kinect camera. Please note that I have verified it and found it working on my machine (Ubuntu 18). I hope your query is solved.

I am closing the issue now. Please feel free to reopen it and comment if the problem persists.

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , Thanks. Can you kindly let me know what resolution you were using for the camera, and what was your output FPS?

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

I was running the camera with default parameters. The output in OpenPose was 4 FPS (with CUDA + cuDNN).

Later on, I noticed that the default FPS parameter is set to 5 in Azure Kinect. Therefore, 4 FPS is not bad in this case.

Furthermore, an explicit sleep is defined inside ros_openpose just to lower CPU usage. You may try to decrease or remove this sleep. Please see here.
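
For reference, the general pattern looks like the sketch below (a generic illustration, not the actual ros_openpose code referenced above): shortening or removing the sleep makes the node poll for new frames more often, at the cost of higher CPU usage.

    # generic illustration of a polling loop throttled by an explicit sleep
    import rospy

    rospy.init_node('throttled_loop_example')
    rate = rospy.Rate(10)  # 10 Hz; raising this (or dropping the sleep) polls more often but uses more CPU
    while not rospy.is_shutdown():
        # ... poll for a new frame / datum here ...
        rate.sleep()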

AswinkarthikeyenAK commented 4 years ago

It should print the 3D coordinates in the terminal. Meanwhile, you should be able to see the detection inside the OpenPose window as well.

Hi @ravijo, The output prints the X and Y coordinates in the terminal. How do you get the 3D coordinates?

Thanks

ravijo commented 4 years ago

Hello @AswinkarthikeyenAK

The output prints the X and Y coordinates in the terminal. How do you get the 3D coordinates?

As you said, the terminal shows only pixel values. These pixels are converted to 3D world coordinates using the depth image and the camera's intrinsic parameters.
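
Conceptually, the conversion is the standard pinhole back-projection. Here is a minimal sketch (assuming the plumb_bob model and a depth image registered to the color image; the intrinsics below are placeholders, the real fx, fy, cx, cy come from the K matrix of the camera_info topic):

    # sketch only: back-project a pixel (u, v) with depth in meters to camera-frame XYZ
    def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return x, y, depth_m

    # placeholder intrinsics; the pixel is one of the values printed in the terminal
    print(pixel_to_3d(1382.0, 520.5, 1.8, fx=975.0, fy=975.0, cx=1020.0, cy=775.0))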

AswinkarthikeyenAK commented 4 years ago

Hi @ravijo , First, I'd like to thank you for your support. I noticed that the skeleton in RViz is quite distorted while visualizing it. rviz_kinect Azure Rviz

Did you have the same issue while visualizing this in RViz, or was it working fine on your system? I am just wondering if this is due to the 2D to 3D conversion issue you mentioned earlier, or something else?

However, please note that at the moment, 3D conversion from image pixels is not working correctly for the Azure Kinect camera. This is due to the different distortion model used by the Azure Kinect camera: so far, ros_openpose assumes the plumb_bob model, but the Azure Kinect reports the rational_polynomial distortion model. This is why the 2D pixel to 3D coordinate calculation is returning wrong points.

Thanks again

ravijo commented 4 years ago

Hi @AswinkarthikeyenAK

I am happy to see that you made it work. Great!

I noticed that the skeleton image in RViz is quite distorted while visualizing it.

Some of the values published under the human skeleton visualization topic are invalid, as I can see the "Uninitialized quaternion, assuming identity" warning message in the screenshot.

Did you have the same issue while visualizing this in RViz, or was it working fine on your system?

Yes. I noticed the same warning message "Uninitialized quaternion, assuming identity" with the Azure Kinect camera.

I am just wondering if this is due to the 2D to 3D conversion issue you mentioned earlier, or something else?

I am not sure. All I can say is that this warning message comes only with the Azure Kinect camera. Perhaps we need to explore the Azure Kinect camera more. Frankly speaking, when you opened the issue and pointed out that you were trying to use the Azure Kinect camera, I decided to get one and quickly made the configuration files, so I have not gained in-depth exposure to this camera yet. For example, I used k4aviewer to see the color and depth images, which look great. However, when I saw the point cloud inside k4aviewer (and RViz too), it was not so satisfactory to me. I think this camera applies so many filters while processing the point cloud data that, unfortunately, the output is not that great (as it was in the case of RealSense).

I would suggest you look at the depth values and then manually verify the coordinates of the skeleton points in 3D space. Give it a try by changing the depth topic (and the camera calibration topic, too).