UM-ARM-Lab / unity_victor_teleop

Unity project for teleoperation of ARM-lab's "Victor" robot using an HTC Vive
Apache License 2.0

Make Standalone Kinect Unity Demo #2

Closed bsaund closed 4 years ago

bsaund commented 4 years ago

Many people have an RGBD sensor, but only our lab has a "Victor" robot. I've received several requests for explanations of the Kinect data specifically. This could be made into a demo that doesn't involve Victor at all, which would make it easier to install on other systems.

rdo50 commented 4 years ago

Hi, may I ask which Kinect sensor you used exactly: the Kinect for Xbox One or the Kinect v2? They look very similar but use different ROS drivers. Also, I couldn't manage to find the ROS package you use to publish camera messages from the Kinect. Thanks a million.

[image]

Another thing: you used the /kinect2_victor_head/qhd/image_color_rect/compressed and /kinect2_victor_head/qhd/image_depth_rect/compressed topics to send data to Unity. Where can I find the scripts for the ROS nodes that publish these topics? As you know, the original topics from the Kinect are normally named /camera/image.... How do I decide which original Kinect topics to use in my own case? Thanks a million

bsaund commented 4 years ago

We use the Kinect for Xbox One with the iai_kinect2 driver: https://github.com/code-iai/iai_kinect2

As we use multiple cameras in our lab, we remap the camera namespace in ROS to kinect2_victor_head for the Kinect on Victor's (our robot's) head. You will need the color and depth images from the Kinect. We use the qhd or "quarter HD" topics, as full HD is too slow. The rect suffix indicates a rectified image, i.e. lens distortion has been removed to create images as if they came from a pinhole camera. I expect the code to work fine without rectified images, but expect to see more distortion.
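
In case it helps to see the Unity side: this project talks to ROS through ROS#, and a compressed color subscriber can be as small as the sketch below. This is an illustrative sketch, not the project's actual listener; the class and field names are invented, and ROS# message namespaces vary a little between versions.

```csharp
using RosSharp.RosBridgeClient;
using UnityEngine;
using CompressedImage = RosSharp.RosBridgeClient.MessageTypes.Sensor.CompressedImage;

// Illustrative sketch of a ROS# subscriber for the compressed (JPEG)
// color topic, e.g. /kinect2_victor_head/qhd/image_color_rect/compressed.
// The Topic field inherited from UnitySubscriber is set in the inspector.
public class ColorImageSketch : UnitySubscriber<CompressedImage>
{
    public MeshRenderer targetRenderer; // quad or screen to show the image on

    private Texture2D texture;
    private byte[] latestJpeg;
    private bool hasNewMessage;

    protected override void Start()
    {
        base.Start();
        texture = new Texture2D(1, 1); // LoadImage resizes on decode
    }

    // Called from the rosbridge receive thread, so only stash the bytes here.
    protected override void ReceiveMessage(CompressedImage message)
    {
        latestJpeg = message.data;
        hasNewMessage = true;
    }

    // Unity API calls must happen on the main thread, hence Update().
    private void Update()
    {
        if (!hasNewMessage) return;
        hasNewMessage = false;
        texture.LoadImage(latestJpeg); // decodes the JPEG payload
        targetRenderer.material.mainTexture = texture;
    }
}
```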

stevensu1838 commented 4 years ago

Hi Buddy,

I've been following your advice and I still have some problems on this topic.

I am using a Kinect 360 camera (with the libfreenect driver) because I couldn't find a Kinect v2 (which uses the iai_kinect2 driver).

I know the project should be ready to use. However, I noticed that the DepthImageListener script doesn't run when it has the private NativeArray<short> decompressedDepth data type. For this reason, I tried to replace the <short> data type with <byte>, and I simply made the following two changes in the DepthImageListener and Kinectview scripts: [image] [image]

Then everything works, and my setup in Unity is shown as follows: [image]

For now, I still have two questions:

  1. I do receive both RGB and depth images in Unity, but they are not aligned. Also, when I put my left arm in front of the Kinect camera, I see only one left arm in the RGB image, but the depth image shows more than one copy of my arm or hand. Can you please tell me why, and how I should adjust? Do I just need to find a Kinect v2? [image] [image] Video clip of this problem

Also, in Unity the screen for the point clouds is huge and far from the robot. This setup is very different from your excellent pancake demo video on YouTube, which is actually my target. How can I change the position of the image screen, since it is not a GameObject in the Hierarchy? [image]

  2. I do see the point cloud in Unity, but the points are not colored, which makes it very hard to use them to represent the shapes of objects. It's just not as good as the point cloud in your demo. I am counting on you; could you please help? Thanks a million [image]

bsaund commented 4 years ago

Your current Kinect should work fine, but you might need to adjust some numbers. Make sure the size of your Kinect images matches the width and height set on the "Depth Image Listener". I suspect the numbers I entered (960 x 540) are larger than your image, so the Depth Image Listener attempts to read off the end of the message's array, which explains the behavior you observed where the "DepthImageListener script doesn't run". It would be better for me to auto-populate this field, but I remember there was some problem with that.
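
Concretely, the check could look something like this sketch (illustrative names, not the actual DepthImageListener fields):

```csharp
using UnityEngine;

// Illustrative sketch: verify the buffer really holds width * height
// 16-bit samples before interpreting it as a depth image.
public static class DepthSizeCheck
{
    public static bool SizeMatches(byte[] decompressedDepth, int width, int height)
    {
        int expectedBytes = width * height * sizeof(ushort); // 2 bytes per pixel
        if (decompressedDepth.Length != expectedBytes)
        {
            Debug.LogError($"Depth buffer is {decompressedDepth.Length} bytes, " +
                           $"but {width}x{height} needs {expectedBytes}. Fix the " +
                           "width/height on the Depth Image Listener to match your camera.");
            return false;
        }
        return true;
    }
}
```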

You see the two arms because you changed the <short> to <byte>. The depth image is an array of shorts, so when you read it as an array of bytes you are no longer aligned. Change this back to <short> and TextureFormat.R16. The current point cloud looks bad for the same reason: you are reading the depth data as <byte> rather than <short>.

Also, the position of the point cloud is relative to the position of the Kinect. In our lab we have a motion capture system tracking the Kinect and publishing its pose. This pose is read by the kinect_head_points Pose Stamped Subscriber. You can either publish this pose over ROS, or hardcode the Kinect_head_points Transform.
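
For concreteness, a sketch of handling the depth data correctly, assuming (as with iai_kinect2) the decompressed buffer holds 16-bit depth values in millimeters; the names are illustrative, not the project's actual code:

```csharp
using Unity.Collections;
using UnityEngine;

// Illustrative sketch: upload 16-bit depth into an R16 texture (one 16-bit
// channel per pixel, matching the array of shorts) and convert a sample
// to meters.
public static class DepthSketch
{
    public static Texture2D ToDepthTexture(NativeArray<byte> decompressedDepth,
                                           int width, int height)
    {
        var tex = new Texture2D(width, height, TextureFormat.R16, false);
        tex.LoadRawTextureData(decompressedDepth); // 2 bytes per pixel, no reinterpretation
        tex.Apply();
        return tex;
    }

    // iai_kinect2 publishes depth as unsigned 16-bit millimeters.
    public static float DepthToMeters(ushort depthMillimeters)
    {
        return depthMillimeters / 1000f;
    }
}
```

Hardcoding the pose just means setting the position and rotation of the Kinect_head_points Transform in the Unity inspector to wherever your camera actually sits.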

Best,

Brad Saund

stevensu1838 commented 4 years ago

Hi mate, you've been so helpful. I finally found the same Kinect as yours. Now I am completely following your setup. However, I've got a problem setting up the Kinect. Could you please take a look at the following question? Maybe you had the same issue before. My question: https://github.com/OpenKinect/libfreenect2/issues/1109 Thank you so much. I am counting on you. You're brilliant. Cheers

bsaund commented 4 years ago

Best of luck, but I have not seen that error

stevensu1838 commented 4 years ago

Hi Buddy, it works now and you are a legend. The problem I had was caused by the graphics card (a 750) in my Ubuntu PC; I replaced it with a 1050. The only issue now is that I am experiencing serious time delay: the 3D scene in VR can be more than a minute behind the actual scene. Any idea how I can achieve as little delay as you do in the demo? Cheers

bsaund commented 4 years ago

My guess is that your graphics card is not powerful enough to keep up with the rate at which you are publishing images, so it builds up a backlog. The easy thing to do for now: publish images from the Kinect at a lower frequency.
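
A hedged aside on how: on the ROS side, topic_tools throttle can republish an image topic at a reduced rate without touching the driver. Alternatively (a different workaround from the rate reduction suggested above), the Unity subscriber can drop stale frames so the GPU only ever decodes the newest image. A sketch of the frame-dropping approach, with invented names:

```csharp
using RosSharp.RosBridgeClient;
using UnityEngine;
using CompressedImage = RosSharp.RosBridgeClient.MessageTypes.Sensor.CompressedImage;

// Illustrative sketch: overwrite a single pending frame instead of queueing,
// so a GPU that cannot keep up never accumulates a backlog.
public class DropStaleFramesSketch : UnitySubscriber<CompressedImage>
{
    private byte[] pendingJpeg; // newest undecoded frame, or null
    private Texture2D texture;

    protected override void Start()
    {
        base.Start();
        texture = new Texture2D(1, 1);
    }

    protected override void ReceiveMessage(CompressedImage message)
    {
        pendingJpeg = message.data; // older unprocessed frames are discarded
    }

    private void Update()
    {
        var jpeg = pendingJpeg;
        if (jpeg == null) return;
        pendingJpeg = null;       // claim the frame
        texture.LoadImage(jpeg);  // decode at most one frame per Update
    }
}
```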

Longer term, I've added this issue: https://github.com/UM-ARM-Lab/unity_victor_teleop/issues/5

Jason-Hayes commented 3 years ago

Hi mate,

Many people have an RGBD sensor, but only our lab has a "Victor" robot. I've received several requests for explanations of the Kinect data specifically. This could be made into a demo that doesn't involve Victor at all, which would make it easier to install on other systems.

I would like to ask: if I want to use my own Kinect with this project, do I need to use your robot (Victor), or just install the corresponding Kinect driver?

bsaund commented 3 years ago

1) You do not need a Kinect specifically. Any depth camera will work as long as it is publishing on the appropriate ROS topics.

2) The Unity project is already set up to use my robot (Victor). Recently someone was able to set up a new robot arm with only slight help from me. Start with my Unity scene, but uncheck all of the "Victor" game objects. Follow the ROS# tutorials, specifically the one on transferring a URDF from ROS to Unity. Next you will need to recreate, for your robot, the same Unity game objects that I have for Victor. This will require a little work in Unity. Take a look at the scripts on the various "Victor" game objects and copy them to your robot's game objects.

bsaund commented 3 years ago

If you have further questions about setting up your robot, go ahead and open a new issue.