ARISE-Initiative / robosuite

robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
https://robosuite.ai

collecting demonstrations in the robosuite #441

Open Nimingez opened 8 months ago

Nimingez commented 8 months ago

After collecting demonstrations in the robosuite env with `collect_human_demonstrations.py` and converting them using the provided conversion script (`conversion/convert_robosuite.py`), the actions are not in the expected [-1, 1] range. Running `get_dataset_info.py` gives:

```
total transitions: 27087
total trajectories: 35
traj length mean: 773.9142857142857
traj length std: 240.8114106727588
traj length min: 362
traj length max: 1490
action min: -7.500000000000007
action max: 7.5
```

```
==== Filter Keys ====
filter key train with 32 demos
filter key valid with 3 demos
```

```
==== Env Meta ====
{
    "type": 1,
    "env_name": "Wipe",
    "env_version": "1.4.1",
    "env_kwargs": {
        "env_name": "Wipe",
        "robots": "UR5e",
        "controller_configs": {
            "type": "OSC_POSE",
            "input_max": 1,
            "input_min": -1,
            "output_max": [0.05, 0.05, 0.05, 0.5, 0.5, 0.5],
            "output_min": [-0.05, -0.05, -0.05, -0.5, -0.5, -0.5],
            "kp": 150,
            "damping_ratio": 1,
            "impedance_mode": "fixed",
            "kp_limits": [0, 300],
            "damping_ratio_limits": [0, 10],
            "position_limits": null,
            "orientation_limits": null,
            "uncouple_pos_ori": true,
            "control_delta": true,
            "interpolation": null,
            "ramp_ratio": 0.2
        }
    }
}
```

```
==== Dataset Structure ====
episode demo_1 with 1129 transitions
    key: actions with shape (1129, 6)
    key: states with shape (1129, 13)
```

```
Traceback (most recent call last):
  File "get_dataset_info.py", line 134, in <module>
    raise Exception("Dataset should have actions in [-1., 1.] but got bounds [{}, {}]".format(action_min, action_max))
Exception: Dataset should have actions in [-1., 1.] but got bounds [-7.500000000000007, 7.5]
```

The actions are not scaled down automatically.
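For anyone hitting the same error, a minimal sketch of the bounds check that `get_dataset_info.py` performs. The `demos` dict below is a made-up stand-in for the per-demo action arrays stored in a robomimic-style HDF5 file (the real script reads them from `data/demo_*/actions`); the values are illustrative only.

```python
import numpy as np

# Stand-in for per-demo action arrays from a robomimic-style HDF5 file.
# Values here are made up to mirror the out-of-range bounds reported above.
demos = {
    "demo_1": np.array([[0.3, -0.2, 7.5, 0.0, 0.1, -0.4]]),
    "demo_2": np.array([[-7.5, 0.5, 0.2, 0.1, -0.1, 0.9]]),
}

# Compute the global action bounds across all demos.
action_min = min(float(a.min()) for a in demos.values())
action_max = max(float(a.max()) for a in demos.values())
print(f"action min: {action_min}, action max: {action_max}")

# get_dataset_info.py raises when the bounds fall outside [-1, 1].
if action_min < -1.0 or action_max > 1.0:
    print("actions are out of range and need to be clipped or rescaled")
```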

Dhanushvarma commented 8 months ago

I think clipping the actions in the script will fix the issue; the authors also suggested this in a previous issue, if I remember correctly.

like so: `action = np.clip(action, -1, 1)`
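A self-contained sketch of what that clipping does to an out-of-range action array (the action values are hypothetical 6-DoF OSC_POSE commands, not taken from a real dataset):

```python
import numpy as np

# Hypothetical out-of-range action sequence (6-DoF OSC_POSE commands).
actions = np.array([
    [ 7.5, -0.3,  0.2, 0.0,  0.1, -0.4],
    [-7.5,  0.8, -1.2, 0.5, -0.1,  0.9],
])

# Clip into the [-1, 1] range expected downstream. Note that clipping
# saturates any command whose magnitude exceeds 1, so large motions are
# flattened rather than rescaled.
clipped = np.clip(actions, -1, 1)
print(clipped.min(), clipped.max())
```

One caveat worth keeping in mind: clipping discards magnitude information above 1, whereas dividing by the device's scale factor would preserve the relative sizes of the motions.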

Nimingez commented 8 months ago

Have you successfully collected data and used it for training with robomimic? @Dhanushvarma

Dhanushvarma commented 8 months ago

What I have noticed is that the keyboard interface is binary, and training on data collected with the keyboard interface gives poor results. I think the way to overcome this is to use a SpaceMouse or some alternative interface.
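To illustrate the point (with synthetic signals, not real teleop data): a keyboard axis can only toggle between a few discrete values, while a SpaceMouse produces a smooth continuous command, so keyboard demonstrations have much larger step-to-step action jumps. The signal shapes and the `jerkiness` helper below are my own illustrative constructions, not robosuite code:

```python
import numpy as np

# Synthetic stand-ins for teleop command signals on one action axis.
t = np.linspace(0, 2 * np.pi, 200)
spacemouse_like = 0.5 * np.sin(t)  # smooth, continuous command
# Keyboard-style: on/off per direction, so values are only -1, 0, or 1.
keyboard_like = np.sign(spacemouse_like) * (np.abs(spacemouse_like) > 0.1)

def jerkiness(a):
    """Mean absolute change between consecutive actions."""
    return float(np.abs(np.diff(a)).mean())

print(f"spacemouse-like jerkiness: {jerkiness(spacemouse_like):.4f}")
print(f"keyboard-like jerkiness:   {jerkiness(keyboard_like):.4f}")
```

The discrete signal's large step changes are one plausible reason keyboard-collected demonstrations train poorly compared to continuous-input devices.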

Abhiram824 commented 1 week ago

Hi @Nimingez, did the action clipping suggestion solve the issue?