Abraham190137 / TeleoperationUnity


want to know more details #2

Open loli11111q opened 3 weeks ago

loli11111q commented 3 weeks ago

Hello, I would like to know more details about your project. I have previously replicated the teleoperation method of OpenTeach, and I can use gestures to control my Franka Emika Panda. However, using gestures makes it very difficult to complete high-quality demo recordings, so I would now like to try your teleoperation method. May I ask whether the Franka Panda can be controlled through the Meta Quest controller in your method? I think controller-based control is more stable than gesture control. Also, is your setup difficult to operate? Do you have any demos or similar material you can show me? I would like to learn more about your project. I hope you can provide me with more details, thank you very much.

Abraham190137 commented 3 weeks ago

Hello, thanks for your interest! To answer your questions,

  1. Yes, the Franka Panda can be controlled through the Meta Quest controller. The pose of the controller (position + rotation), along with the goal gripper width, is sent to your control script on your PC, where that goal pose can be passed into whatever controller you use (see the sketch after this list).
  2. The teleoperation isn't difficult to use and isn't too bad to set up. The hard part is detecting objects in the scene and passing them to the VR headset for rendering. However, if you don't want to deal with that, an easy workaround is to observe the environment directly: wear the Oculus on your forehead and just look at the robot.
  3. For demos, this video https://www.youtube.com/watch?v=l1LYNeVtkM8&t=3s shows both the controller teleoperation and the hand teleoperation (the code supports both). The first half of the video is controller-based teleoperation.
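
As a rough illustration of item 1 (not the repo's actual code), here is a minimal Python sketch of the Control-PC side, assuming a plain TCP socket and a flat float message of position + quaternion + gripper width; the real message layout used by the Oculus app may differ.

```python
import socket
import struct

# Hypothetical message format for illustration only; the actual layout used by
# this repo's Oculus app may differ.
POSE_MSG_FORMAT = "8f"  # x, y, z, qx, qy, qz, qw, gripper_width (float32)
POSE_MSG_SIZE = struct.calcsize(POSE_MSG_FORMAT)

def receive_goal_pose(conn: socket.socket):
    """Read one goal-pose message from the headset connection.
    (A real implementation should loop until the full message arrives.)"""
    data = conn.recv(POSE_MSG_SIZE)
    if len(data) < POSE_MSG_SIZE:
        return None
    x, y, z, qx, qy, qz, qw, width = struct.unpack(POSE_MSG_FORMAT, data)
    return (x, y, z), (qx, qy, qz, qw), width

# Example usage: listen for the headset and hand each goal pose to your controller.
if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 5000))  # port is arbitrary here
    server.listen(1)
    conn, _ = server.accept()
    while True:
        goal = receive_goal_pose(conn)
        if goal is None:
            break
        position, rotation, gripper_width = goal
        # pass (position, rotation, gripper_width) to whatever controller you use
```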

A brief note: I set this up around a year ago using the Oculus Quest 2. Since then, the Quest 3 has come out. I believe everything should work fine with the Quest 3 if you want to try that, but I haven't tested it. Also, the SDKs have been updated since then. We recently tried installing everything from scratch and it worked fine, but be aware that versions may have changed.

If you have any other questions, don't hesitate to reach out! Good luck!

loli11111q commented 3 weeks ago

Thank you very much for your reply. I have a few questions. I have seen your approach, which takes the object identification and reconstruction route: you model the real object, which requires multiple RealSense cameras. However, I only have one RealSense camera. How should I proceed, given that I mainly want to achieve teleoperation and collect data, and do not care about anything else? Can I skip this part and not model the real object? Will your teleoperation still reproduce smoothly in that case?

By the way, is your project installed on one computer or on two computers (NUC + desktop)? If it is two computers, where is your Unity engine installed? Is it on the host running FrankaPy or the host running Frankainterface? Thanks.

Looking forward to your reply.

Abraham190137 commented 3 weeks ago

Hello! As for your first question, that depends on how you want to observe the scene during teleoperation. Do you need to render the scene in VR, or can you look directly at the robot in order to control it? If you can directly observe the robot, then the workaround I mentioned before (wearing the Oculus on your forehead and looking directly at the robot) may be a good option for you. If you do need the object shown in VR, then one RealSense camera could work fine: since it is a depth camera, you can get the full pose of your object from it, although you may run into issues with occlusion if anything blocks that camera's view.
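
As an aside on the single-camera option (not part of this repo), pyrealsense2 can recover a 3D point for a detected object from one aligned depth frame, roughly like the sketch below. The pixel coordinates, stream settings, and the rest of the object-pose pipeline are placeholders.

```python
import pyrealsense2 as rs

# Minimal sketch: recover a 3D point for a tracked pixel from a single RealSense.
# Assumes the camera is connected and a detector elsewhere gives you (u, v) for the object.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth to the color image

frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
intrinsics = depth_frame.profile.as_video_stream_profile().get_intrinsics()

u, v = 320, 240  # placeholder pixel from your object detector
depth_m = depth_frame.get_distance(u, v)
point_xyz = rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth_m)
print("Object point in camera frame (m):", point_xyz)

pipeline.stop()
```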

As for performance, skipping the object rendering won't hurt anything; if anything it helps, since there are fewer objects for your system to track and for the Oculus to render. I actually recommend this approach for the initial setup/testing.

In terms of setup, we currently use two computers, one to run FrankaPy and one to run Frankainterface, although we previously used one machine for both. We made the switch because the real-time kernel needed for Frankainterface did not play nicely with the Nvidia drivers we needed to run ML models for a different project. This consideration, however, is tangential to the teleoperation discussion. For teleoperation, there are three devices to consider: the Oculus, the computer that communicates with the Oculus, and the machine that runs Unity for developing the Oculus app.

Starting with the machine that runs Unity, this can be anything. Unity is NOT run during deployment. It is only used to compile the teleoperation Oculus app which is then pushed to the Oculus and run on the Oculus. So, use whatever machine is most convenient for you.

During operation, the Oculus communicates with a computer, which I will refer to as the Control PC. The control PC's job is to get the current control command from the Oculus, in the form of a goal pose, and send that goal pose to the robot to execute. The Control PC also needs to send back to the Oculus the current observation of the scene, which consists of the robot's current pose, along with information on any objects in the scene you wish to render (if you don't want to render anything, you can leave this part blank). Note: Pose here refers to the position and rotation of the robot's end-effector, along with the gripper width, if using a parallel plate gripper.
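
To make the returned observation concrete, here is a hypothetical message structure (end-effector pose + gripper width + optional object poses). The actual format the Oculus app expects may differ; this only illustrates what the Control PC sends back.

```python
import struct
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical observation message for illustration; the layout expected by
# the Oculus app in this repo may differ.
@dataclass
class ObjectPose:
    object_id: int
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float, float]  # quaternion

@dataclass
class Observation:
    ee_position: Tuple[float, float, float]
    ee_rotation: Tuple[float, float, float, float]
    gripper_width: float
    objects: List[ObjectPose]  # leave empty if you don't render any objects

def pack_observation(obs: Observation) -> bytes:
    """Serialize the observation into a flat binary message for the headset."""
    payload = struct.pack("8f", *obs.ee_position, *obs.ee_rotation, obs.gripper_width)
    payload += struct.pack("I", len(obs.objects))
    for obj in obs.objects:
        payload += struct.pack("i7f", obj.object_id, *obj.position, *obj.rotation)
    return payload
```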

To complete this task, the control PC needs access to the following:

  1. A network connection to the Oculus (to receive goal poses and send back observations).
  2. A way to send a goal pose to the robot controller.
  3. A way to read the robot's current pose (and gripper width, if you are using a parallel plate gripper).

If you are using FrankaPy for control, then the easiest way to meet these requirements is to use whatever machine is running FrankaPy as the Control PC, using FrankaPy to run the goto_pose and get_pose commands. But any setup that allows the Control PC to send a goal pose to the robot controller and get the robot's current pose will work.
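
As a sketch of that FrankaPy route, the snippet below assumes the standard FrankaPy/autolab_core API (FrankaArm.goto_pose, get_pose, goto_gripper); the frame names and conversion details are illustrative only.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from autolab_core import RigidTransform
from frankapy import FrankaArm

# Sketch only: assumes the standard FrankaPy API and the message handling
# from the earlier socket sketch.
fa = FrankaArm()

def execute_goal(position, rotation_xyzw, gripper_width):
    """Send one goal pose (from the Oculus controller) to the robot."""
    goal = RigidTransform(
        rotation=Rotation.from_quat(rotation_xyzw).as_matrix(),
        translation=np.array(position),
        from_frame="franka_tool",
        to_frame="world",
    )
    fa.goto_pose(goal)              # command the end-effector to the goal pose
    fa.goto_gripper(gripper_width)  # command the parallel gripper width

def current_robot_pose():
    """Read the robot's current end-effector pose to send back to the Oculus."""
    pose = fa.get_pose()            # RigidTransform of the end-effector
    return pose.translation, pose.quaternion
```

In practice, you would feed each (position, rotation, gripper_width) tuple received from the headset into execute_goal() and stream current_robot_pose() back as part of the observation described above.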