michel-aractingi opened 1 week ago
Thanks @marinabar, who tested the script on her setup.
I'm noticing quite a bit of the new script could be DRY'ed up, since it rehashes a fair bit of the original `control_robot`. I'm curious: what issues are you finding with using `control_robot` with simulated environments? Maybe there are ways of improving `control_robot` so it better handles abstractions like simulation. That way there's less code to maintain :grin:
Hey @apockill! You're right, we might be able to find a general solution in `control_robot.py`, but I feel there are a few elements that could make the script ugly:

1. `env` vs `robot`. In `control_robot.py`, reading from and writing to the robot is done using only the `Robot` class. In simulation, we would need an additional environment instance along with the robot, which would also have to be passed to all the functions. We could modify `lerobot/common/robot_devices/control_utils.py`, for instance, and put ifs and elses everywhere to account for that, but I think it would add unnecessary complexity.
2. `fps` on the real system vs. in simulation is different.

So even though the two scripts resemble each other, I still think it is cleaner to keep them separate. What do you think? If you have some vision of how we can improve on that or merge the two scripts, I would be happy to chat or have a look :D
## What this does
Adds a script `control_sim_robot.py` in `lerobot/scripts` that has the same functionality and interface as `control_robot.py`, but for simulated environments.

The script has three control modes:

- **Teleoperate**: teleoperate the simulated robot with the real leader arm.
- **Record**: collect data and create a dataset, which can be uploaded to the hub given the `--repo-id` option. The dataset created contains additional columns related to reinforcement learning, like `next.reward`, `next.success` and `seed` (see the sketch after this list).
- **Replay**: replay recorded episodes in the simulated environment.
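As a quick illustration of those extra columns, here is a minimal sketch of inspecting them, assuming the standard `LeRobotDataset` API; the repo id is illustrative:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Repo id is illustrative; use whatever was passed to --repo-id when recording.
dataset = LeRobotDataset("user/sim_test")

# Each frame is a dict of tensors; the RL-related columns added by this
# script sit alongside the usual observation/action keys.
frame = dataset[0]
print(frame["next.reward"], frame["next.success"], frame["seed"])
```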
## Simulation environments
Along with the `--robot-path` argument, the script requires a path to the configuration file of the simulation environment, defined in `lerobot/configs/env`. Example of the configuration file for gym_lowcostrobot, showing the essential elements:
## How to test
First, install the gym_lowcostrobot environment and add the environment's config file in `yaml` format.

Test teleoperation:
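A sketch of the invocation, assuming the subcommand interface mirrors `control_robot.py`; the `--sim-config` flag name and config filename are illustrative:

```bash
python lerobot/scripts/control_sim_robot.py teleoperate \
    --robot-path lerobot/configs/robot/koch.yaml \
    --sim-config lerobot/configs/env/lowcostrobot.yaml
```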
Test data collection and upload to the hub:
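Again assuming a `control_robot.py`-style interface (flag names illustrative), recording a couple of episodes might look like:

```bash
python lerobot/scripts/control_sim_robot.py record \
    --robot-path lerobot/configs/robot/koch.yaml \
    --sim-config lerobot/configs/env/lowcostrobot.yaml \
    --fps 30 \
    --repo-id $USER/sim_test \
    --num-episodes 2
```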
Replay the episodes:
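A hypothetical replay invocation, mirroring `control_robot.py`'s `replay` mode (flag names illustrative):

```bash
python lerobot/scripts/control_sim_robot.py replay \
    --sim-config lerobot/configs/env/lowcostrobot.yaml \
    --repo-id $USER/sim_test \
    --episode 0
```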
In the script we save the `seed` in the dataset, which enables us to reset the environment to the same state it was in when the data was collected, making the replay successful.

Finally, visualize the dataset:
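`visualize_dataset.py` already exists in lerobot, so this step should work the same way as for real-robot datasets (repo id illustrative):

```bash
python lerobot/scripts/visualize_dataset.py \
    --repo-id $USER/sim_test \
    --episode-index 0
```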
## TODO
- [ ] Test with more simulation environments: brax, maniskill, IsaacLab ...
- [ ] Add keyboard control of the end-effector.
Note: You might need to run `mjpython` if you're using a Mac.

Note: The current script requires a real leader arm in order to teleoperate sim environments. We can add support for keyboard control of the end-effector for people who don't have the real robot.