isaac-sim / IsaacLab

Unified framework for robot learning built on NVIDIA Isaac Sim
https://isaac-sim.github.io/IsaacLab

[Bug Report] running robot_assembler.py in IsaacLab #705

Closed 17enunez closed 1 month ago

17enunez commented 1 month ago

[bug] /isaac-sim/exts/omni.isaac.robot_assembler/omni/isaac/robot_assembler/robot_assembler.py

Describe the bug

```
TypeError: can't convert cuda device type tensor to numpy. Use tensor.cpu() to copy the tensor to host memory first.
```

When trying to run IsaacSim's robot_assembler.assemble_articulations() in an IsaacLab script we were required to change a GPU tensor to a numpy array. We traced the error to the robot_assembler.py file. The _move_obj_b_to_local_pos function grabs orientation and translation data in a cuda tensor and tries to use it in a function that requires a numpy array. We found a fix and describe the changes below.
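The device-to-host conversion pattern behind our fix can be sketched as a small helper. This is a hypothetical illustration, not part of the Isaac Sim API — the `to_numpy` name and its duck-typing checks are ours:

```python
import numpy as np

def to_numpy(x):
    """Return a NumPy array regardless of where the input lives.

    Hypothetical helper: a torch tensor on a CUDA device is first
    copied to host memory with .cpu(); anything already host-side
    (list, tuple, ndarray) is passed straight to np.asarray().
    """
    if hasattr(x, "cpu"):      # torch.Tensor, possibly on a CUDA device
        x = x.cpu()
    if hasattr(x, "numpy"):    # CPU torch tensor -> ndarray without a copy
        return x.numpy()
    return np.asarray(x)
```

Calling `np.asarray` directly on a CUDA tensor is what triggers the `TypeError` above; routing through `.cpu()` first is the safe path.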

Steps to reproduce

```python
from omni.isaac.core.utils.extensions import enable_extension
enable_extension("omni.isaac.robot_assembler")
from omni.isaac.robot_assembler import RobotAssembler, AssembledRobot

# Define base_robot_path, attach_robot_path, base_robot_mount_frame,
# attach_robot_mount_frame, fixed_joint_offset, fixed_joint_orient,
# and single_robot for your setup.

robot_assembler = RobotAssembler()
assembled_robot = robot_assembler.assemble_articulations(
    base_robot_path,
    attach_robot_path,
    base_robot_mount_frame,
    attach_robot_mount_frame,
    fixed_joint_offset,
    fixed_joint_orient,
    mask_all_collisions=True,
    single_robot=single_robot,
)
```

The error is raised at the line `assembled_robot = robot_assembler.assemble_articulations(`:

```
TypeError: can't convert cuda device type tensor to numpy. Use tensor.cpu() to copy the tensor to host memory first.
```

We traced the error to this function within `robot_assembler.py`:

```python
def _move_obj_b_to_local_pos(base_mount_path, attach_path, attach_mount_path, rel_offset, rel_orient):
```

FIX within robot_assembler.py:

```python
def _move_obj_b_to_local_pos(base_mount_path, attach_path, attach_mount_path, rel_offset, rel_orient):
    # Get the position of the base mount path.
    # (We set `import pdb; pdb.set_trace()` here to trace the error.)
    # On the next line, a_trans and a_orient come back as CUDA tensors on the GPU:
    a_trans, a_orient = XFormPrim(base_mount_path).get_world_pose()

    # Our fix: copy the tensors to host memory and convert them to NumPy arrays.
    a_trans = np.asarray(a_trans.cpu())
    a_orient = np.asarray(a_orient.cpu())

    # quats_to_rot_matrices needs a CPU-side NumPy array:
    a_rot = quats_to_rot_matrices(a_orient)
    rel_rot = quats_to_rot_matrices(rel_orient)
```
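For reference, a quaternion-to-rotation-matrix conversion of the kind `quats_to_rot_matrices` performs can be sketched in plain NumPy. This is an illustrative stand-in, not the Isaac Sim implementation; it assumes scalar-first (w, x, y, z) ordering, which Isaac Sim uses:

```python
import numpy as np

def quat_to_rot_matrix(q):
    """Convert one unit quaternion (w, x, y, z) to a 3x3 rotation matrix.

    Illustrative stand-in for robot_assembler's quats_to_rot_matrices;
    it only accepts host-side (NumPy-convertible) input, which is why
    the CUDA tensors above must be moved to the CPU first.
    """
    w, x, y, z = np.asarray(q, dtype=float)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
```

Passing a CUDA tensor into a function like this fails at `np.asarray`, which is exactly the failure mode described in this report.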

Additionally, the same data-type error occurs elsewhere in the same file:

```python
# get_local_pose() also returns CUDA tensors:
t_bc, q_bc = XFormPrim(attach_mount_path).get_local_pose()

# fix (CUDA -> NumPy):
t_bc = np.asarray(t_bc.cpu())
q_bc = np.asarray(q_bc.cpu())

# quats_to_rot_matrices now has a NumPy array to use:
r_bc = quats_to_rot_matrices(q_bc)
```


System Info

Describe the characteristic of your environment:

Additional context

The IsaacLab file running is an altered version of arms.py that imports and assembles the Robotiq 2F85 gripper and Kinova Gen3 arm.

Checklist

Acceptance Criteria


SantiDiazC commented 1 month ago

Hi @17enunez, that issue and a temporary workaround are discussed in issue #405; a solution mentioned there is to use the extension with the `DirectRLEnv` class.

Mayankm96 commented 1 month ago

Closing this issue in favor of #405. Let's keep the discussion at one place :)