Voxels from a real-world depth camera block motion planning for the simulated robotic arm in Isaac Sim #323
Please provide the below information in addition to your issue:
cuRobo installation mode (choose from [python, isaac sim, docker python, docker isaac sim]): isaac sim
python version: 3.10
Isaac Sim version (if using): 2023.1.0
Hello there, I'm interested in understanding how to achieve the effect showcased on the homepage using a depth camera in conjunction with a physical robotic arm, as shown in this video:
https://curobo.org/videos/dynamic_franka.mp4
Here's my current situation:
I've successfully imported my robotic arm into cuRobo following the provided tutorials and have run all the test files without issues.
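For context, this is roughly how I load the arm and plan against a static world, following cuRobo's motion-generation examples (a minimal sketch; `my_robot.yml` stands in for my own robot configuration file):

```python
# Minimal sketch of loading the arm and planning against a static world,
# following cuRobo's motion-generation examples. "my_robot.yml" is a
# placeholder for my own robot configuration file.
from curobo.types.math import Pose
from curobo.types.robot import JointState
from curobo.wrap.reach.motion_gen import MotionGen, MotionGenConfig

motion_gen_config = MotionGenConfig.load_from_robot_config(
    "my_robot.yml",         # my robot configuration (placeholder name)
    "collision_table.yml",  # static world file used for the first tests
    interpolation_dt=0.02,
)
motion_gen = MotionGen(motion_gen_config)
motion_gen.warmup()

# Plan from the retract configuration to a Cartesian goal pose (wxyz quaternion):
start_state = JointState.from_position(motion_gen.get_retract_config().view(1, -1))
goal_pose = Pose.from_list([0.4, 0.0, 0.4, 1.0, 0.0, 0.0, 0.0])
result = motion_gen.plan_single(start_state, goal_pose)
if result.success.item():
    trajectory = result.get_interpolated_plan()
```

This all works as expected against a static world model.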
My goal is to replicate the demonstrated functionality where the real-world environment, including the desk and robotic arm, is accurately reflected within the simulator for collision-free motion planning.
To achieve this, I've set up a physical desk, a robotic arm, and a depth camera. Within cuRobo, I've loaded the robotic arm and positioned a virtual camera to match the real-world setup. Using nvblox, I've attempted to map the real tabletop into the simulator. However, I've run into an issue: the voxels representing the desk, as captured by the depth camera, are blocking motion planning for the simulated robotic arm.
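Concretely, this is roughly how I'm feeding the depth frames into the collision world, modeled on the realsense examples shipped with cuRobo (a sketch under those assumptions; the depth stub, intrinsics values, and camera pose below are placeholders for my actual camera pipeline):

```python
# Sketch of feeding depth frames into the nvblox ("blox") collision world,
# modeled on cuRobo's realsense examples. The depth stub, intrinsics, and
# camera pose are placeholders for my own pipeline.
import torch

from curobo.geom.sdf.world import CollisionCheckerType
from curobo.geom.types import WorldConfig
from curobo.types.camera import CameraObservation
from curobo.types.math import Pose
from curobo.wrap.reach.motion_gen import MotionGen, MotionGenConfig

# An empty blox layer that nvblox fills from camera observations.
world_cfg = WorldConfig.from_dict(
    {
        "blox": {
            "world": {
                "pose": [0, 0, 0, 1, 0, 0, 0],
                "integrator_type": "occupancy",
                "voxel_size": 0.02,
            }
        }
    }
)
motion_gen = MotionGen(
    MotionGenConfig.load_from_robot_config(
        "my_robot.yml",  # placeholder robot configuration
        world_cfg,
        collision_checker_type=CollisionCheckerType.BLOX,
    )
)
motion_gen.warmup()


def get_depth_frame() -> torch.Tensor:
    # Placeholder for my depth pipeline: returns an HxW depth image tensor.
    return torch.zeros((480, 640), device="cuda")


# Placeholder pinhole intrinsics and the measured camera pose in the robot's
# base frame (wxyz quaternion), matching my real-world calibration.
intrinsics = torch.tensor([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
camera_pose = Pose.from_list([0.6, 0.0, 0.7, 1.0, 0.0, 0.0, 0.0])

# Per frame: wrap the observation and integrate it into the blox layer.
cam_obs = CameraObservation(
    depth_image=get_depth_frame(), intrinsics=intrinsics, pose=camera_pose
)
world_model = motion_gen.world_collision
world_model.add_camera_frame(cam_obs, "world")
world_model.process_camera_frames("world", False)
torch.cuda.synchronize()
world_model.update_blox_hashes()
```

With this running, the voxel layer does populate with the desk, but subsequent planning calls fail.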
Could you guide me towards a solution for this obstacle? Alternatively, could you explain how the effect in question is typically achieved, ensuring that the imported real-world elements facilitate rather than hinder motion planning within the simulation?
Many thanks in advance for your assistance!