Open · mmurray opened 1 month ago
Hello @mmurray, we are currently working on adding control_sim_robot.py ;)

I'll let @michel-aractingi comment if he has time.

For lerobot/aloha_sim_transfer_cube_human, you might find more info in the original ALOHA paper: https://arxiv.org/abs/2304.13705

Best
Hello @mmurray,

You can find the control_sim_robot.py script in this branch. Just keep in mind that this is not the final version, and plenty of things will change, especially with the new refactoring of control_robot.py. I have also only tested it with the MuJoCo environment in gym_lowcostrobot.

The usage is exactly the same as control_robot.py; you only need to define a sim config yaml file in lerobot/configs/env.

Here's an example of the one I am using with gym_lowcostrobot:
```yaml
# @package _global_
fps: 50
env:
  name: lowcostrobot
  fps: ${fps}
  handle: PushCubeLoop-v0
  gym:
    render_mode: human
    max_episode_steps: 100000
  calibration:
    axis_directions: [-1, -1, 1, -1, -1, -1]
    offsets: [0, -0.5, -0.5, 0, -0.5, 0] # factor of pi
eval:
  use_async_envs: false
```
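To make the calibration block a bit more concrete, here is a minimal sketch of how per-joint axis_directions and offsets expressed as fractions of pi are typically applied when converting raw leader-arm joint angles into the simulated robot's joint targets. The helper name and the exact order of operations are assumptions for illustration, not necessarily what control_sim_robot.py does in the branch.

```python
import numpy as np

# Assumed convention, for illustration only: flip each joint's sign with
# axis_directions and shift it by offsets (fractions of pi) so the real
# leader arm's zero pose lines up with the MuJoCo model's zero pose.
AXIS_DIRECTIONS = np.array([-1, -1, 1, -1, -1, -1], dtype=np.float64)
OFFSETS = np.array([0, -0.5, -0.5, 0, -0.5, 0], dtype=np.float64) * np.pi

def real_to_sim_positions(real_joint_angles: np.ndarray) -> np.ndarray:
    """Map raw leader-arm joint angles (radians) to sim joint targets."""
    return AXIS_DIRECTIONS * (real_joint_angles + OFFSETS)
```

The sign flips typically account for motors mounted mirror-imaged relative to the sim model, and the offsets for a different zero position between the physical arm and the MuJoCo model; check the branch's own conversion code for the exact convention.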
Hello,

Can you provide info on how human supervision was provided for the simulated datasets (e.g. lerobot/aloha_sim_transfer_cube_human)? I am starting to set up a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, but it seems like the current control_robot.py script and the data collection examples are set up only for physical robots. Is there a branch somewhere with the code used to collect lerobot/aloha_sim_transfer_cube_human that I can reference?

Thanks!
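For reference on the question above, here is a minimal sketch of the kind of human-teleop recording loop being asked about, written against the standard gymnasium API. The env id is borrowed from the config earlier in the thread, and get_teleop_action is a hypothetical placeholder for whatever input device drives the robot (leader arm, keyboard, spacemouse); this is not the actual control_sim_robot.py or ALOHA collection code.

```python
import gymnasium as gym
import numpy as np
import gym_lowcostrobot  # noqa: F401  # importing the package should register its envs

def get_teleop_action(env):
    # Placeholder: replace with readings from your teleop device
    # (leader-arm joint positions, keyboard, spacemouse, ...).
    return env.action_space.sample()

env = gym.make("PushCubeLoop-v0", render_mode="human")

episode = {"observations": [], "actions": []}
obs, info = env.reset()
done = False
while not done:
    action = get_teleop_action(env)
    episode["observations"].append(obs)
    episode["actions"].append(np.asarray(action))
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()

# `episode` now holds synchronized observation/action pairs that can be
# converted into a dataset for imitation learning.
```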