Skylark0924 / Rofunc

🤖 The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation
https://rofunc.readthedocs.io
Apache License 2.0


Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides valuable and convenient Python functions for demonstration collection, data pre-processing, LfD algorithms, planning, and control. We also provide IsaacGym- and OmniIsaacGym-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the collection, processing, and learning of demonstration data, as well as its deployment on robots.

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
          title = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
          author = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
          year = {2023},
          publisher = {Zenodo},
          doi = {10.5281/zenodo.10016946},
          url = {https://doi.org/10.5281/zenodo.10016946},
}

> [!WARNING]
> If our code is found to be used in a published paper without proper citation, we reserve the right to address this issue formally by contacting the editor to report potential academic misconduct!


Update News 🎉🎉🎉

v0.0.2.6: Support for dexterous grasping and human-humanoid robot skill transfer

Installation

Please refer to the installation guide.
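The guide covers simulator-specific dependencies in detail. As a quick sketch only (assuming the PyPI package name `rofunc` and a conda-based setup; check the installation guide for the supported Python version and for IsaacGym instructions), a basic install might look like:

```shell
# Create an isolated environment first (see the guide for the supported Python version)
conda create -n rofunc python=3.8 -y
conda activate rofunc

# Install the core package from PyPI; simulator backends such as IsaacGym
# are distributed separately by NVIDIA and must be installed on their own
pip install rofunc
```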

Documentation

Documentation Example Gallery

To give you a quick overview of the pipeline of rofunc, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.

The available functions and plans can be found as follows.

> **Note**
> ✅: Achieved 🔃: Reformatting ⛔: TODO

| Data | | Learning | | P&C | | Tools | | Simulator | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [`xsens.record`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | `DMP` | ⛔ | [`LQT`](https://rofunc.readthedocs.io/en/latest/planning/lqt.html) | ✅ | `config` | ✅ | [`Franka`](https://rofunc.readthedocs.io/en/latest/simulator/franka.html) | ✅ |
| [`xsens.export`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | [`GMR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.gmr.html) | ✅ | [`LQTBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqt.lqt.html) | ✅ | [`logger`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.logger.beauty_logger.html) | ✅ | [`CURI`](https://rofunc.readthedocs.io/en/latest/simulator/curi.html) | ✅ |
| [`xsens.visual`](https://rofunc.readthedocs.io/en/latest/devices/xsens.html) | ✅ | [`TPGMM`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTFb`](https://rofunc.readthedocs.io/en/latest/planning/lqt_fb.html) | ✅ | [`datalab`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.datalab.html) | ✅ | `CURIMini` | 🔃 |
| [`opti.record`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMMBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTCP`](https://rofunc.readthedocs.io/en/latest/planning/lqt_cp.html) | ✅ | [`robolab.coord`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.coord.transform.html) | ✅ | [`CURISoftHand`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.curi_sim.html) | ✅ |
| [`opti.export`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMM_RPCtl`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQTCPDMP`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqt.lqt_cp_dmp.html) | ✅ | [`robolab.fk`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.kinematics.fk.html) | ✅ | [`Walker`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.walker_sim.html) | ✅ |
| [`opti.visual`](https://rofunc.readthedocs.io/en/latest/devices/optitrack.html) | ✅ | [`TPGMM_RPRepr`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmm.html) | ✅ | [`LQR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.lqr.html) | ✅ | [`robolab.ik`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.robolab.kinematics.ik.html) | ✅ | `Gluon` | 🔃 |
| [`zed.record`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPGMR`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmr.html) | ✅ | [`PoGLQRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.lqr.html) | ✅ | `robolab.fd` | ⛔ | `Baxter` | 🔃 |
| [`zed.export`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPGMRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tpgmr.html) | ✅ | [`iLQR`](https://rofunc.readthedocs.io/en/latest/planning/ilqr.html) | 🔃 | `robolab.id` | ⛔ | `Sawyer` | 🔃 |
| [`zed.visual`](https://rofunc.readthedocs.io/en/latest/devices/zed.html) | ✅ | [`TPHSMM`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.learning.ml.tphsmm.html) | ✅ | [`iLQRBi`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_bi.html) | 🔃 | [`visualab.dist`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.distribution.html) | ✅ | [`Humanoid`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.humanoid_sim.html) | ✅ |
| [`emg.record`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.emg.record.html) | ✅ | [`RLBaseLine(SKRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RLBaseLine/SKRL.html) | ✅ | `iLQRFb` | 🔃 | [`visualab.ellip`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.ellipsoid.html) | ✅ | [`Multi-Robot`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.simulator.multirobot_sim.html) | ✅ |
| [`emg.export`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.emg.export.html) | ✅ | `RLBaseLine(RLlib)` | ✅ | [`iLQRCP`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_cp.html) | 🔃 | [`visualab.traj`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.visualab.trajectory.html) | ✅ | | |
| `mmodal.record` | ⛔ | `RLBaseLine(ElegRL)` | ✅ | [`iLQRDyna`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_dyna.html) | 🔃 | [`oslab.dir_proc`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.dir_process.html) | ✅ | | |
| [`mmodal.sync`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.devices.mmodal.sync.html) | ✅ | `BCO(RofuncIL)` | 🔃 | [`iLQRObs`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.planning_control.lqr.ilqr_obstacle.html) | 🔃 | [`oslab.file_proc`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.file_process.html) | ✅ | | |
| | | `BC-Z(RofuncIL)` | ⛔ | `MPC` | ⛔ | [`oslab.internet`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.internet.html) | ✅ | | |
| | | `STrans(RofuncIL)` | ⛔ | `RMP` | ⛔ | [`oslab.path`](https://rofunc.readthedocs.io/en/latest/apidocs/rofunc/rofunc.utils.oslab.path.html) | ✅ | | |
| | | `RT-1(RofuncIL)` | ⛔ | | | | | | |
| | | [`A2C(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/A2C.html) | ✅ | | | | | | |
| | | [`PPO(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/PPO.html) | ✅ | | | | | | |
| | | [`SAC(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/SAC.html) | ✅ | | | | | | |
| | | [`TD3(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/TD3.html) | ✅ | | | | | | |
| | | `CQL(RofuncRL)` | ⛔ | | | | | | |
| | | `TD3BC(RofuncRL)` | ⛔ | | | | | | |
| | | [`DTrans(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/DTrans.html) | ✅ | | | | | | |
| | | `EDAC(RofuncRL)` | ⛔ | | | | | | |
| | | [`AMP(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/AMP.html) | ✅ | | | | | | |
| | | [`ASE(RofuncRL)`](https://rofunc.readthedocs.io/en/latest/lfd/RofuncRL/ASE.html) | ✅ | | | | | | |
| | | `ODTrans(RofuncRL)` | ⛔ | | | | | | |
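To give a sense of what the `LQT`/`LQR` planners listed above compute, here is a minimal finite-horizon linear quadratic sketch for a double integrator in plain NumPy (this illustrates the underlying math only, not rofunc's API; all variable names here are illustrative). A backward Riccati recursion produces time-varying feedback gains that drive the state to a target.

```python
import numpy as np

# Double-integrator dynamics: state [position, velocity], control = acceleration
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])   # state tracking cost
R = np.array([[0.01]])      # control effort cost
T = 500                     # horizon length

# Backward Riccati recursion for finite-horizon LQR gains
P = Q.copy()
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()  # gains[t] is the feedback gain to apply at step t

# Track a target position by regulating the error state to zero; this shift
# is exact here because a zero-velocity target is an equilibrium of the system
target = np.array([1.0, 0.0])
x = np.array([0.0, 0.0])
for K in gains:
    u = -K @ (x - target)
    x = A @ x + B @ u

print(x)  # final state approaches the target
```

The LQT variants in rofunc generalize this idea to full reference trajectories (and, for the `Bi` versions, bimanual coordination), but the Riccati-style backward pass is the common core.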

RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as differentiable simulators like PlasticineLab and DiffCloth. Here is a list of robot tasks trained with RofuncRL:

> **Note**\
> You can customize your own project based on RofuncRL by following the RofuncRL customize tutorial.\
> We also provide a RofuncRL-based repository template for generating your own repository that follows the RofuncRL structure with one click.\
> For more details, please check the RofuncRL documentation.

The list of all supported tasks.

| Tasks | Animation | Performance | [ModelZoo](https://github.com/Skylark0924/Rofunc/blob/main/rofunc/config/learning/model_zoo.json) |
|-------|-----------|-------------|------|
| Ant | ![](doc/img/task_gifs/AntRofuncRLPPO.gif) | | ✅ |
| Cartpole | | | |
| FrankaCabinet | ![](doc/img/task_gifs/FrankaCabinetRofuncRLPPO.gif) | | ✅ |
| FrankaCubeStack | | | |
| CURICabinet | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | ✅ |
| CURICabinetImage | ![](doc/img/task_gifs/CURICabinetRofuncRLPPO.gif) | | |
| CURICabinetBimanual | | | |
| CURIQbSoftHandSynergyGrasp | | | ✅ |
| Humanoid | ![](doc/img/task_gifs/HumanoidRofuncRLPPO.gif) | | ✅ |
| HumanoidAMPBackflip | ![](doc/img/task_gifs/HumanoidFlipRofuncRLAMP.gif) | | ✅ |
| HumanoidAMPWalk | | | ✅ |
| HumanoidAMPRun | ![](doc/img/task_gifs/HumanoidRunRofuncRLAMP.gif) | | ✅ |
| HumanoidAMPDance | ![](doc/img/task_gifs/HumanoidDanceRofuncRLAMP.gif) | | ✅ |
| HumanoidAMPHop | ![](doc/img/task_gifs/HumanoidHopRofuncRLAMP.gif) | | ✅ |
| HumanoidASEGetupSwordShield | ![](doc/img/task_gifs/HumanoidASEGetupSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASEPerturbSwordShield | ![](doc/img/task_gifs/HumanoidASEPerturbSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASEHeadingSwordShield | ![](doc/img/task_gifs/HumanoidASEHeadingSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASELocationSwordShield | ![](doc/img/task_gifs/HumanoidASELocationSwordShieldRofuncRLASE.gif) | | ✅ |
| HumanoidASEReachSwordShield | | | ✅ |
| HumanoidASEStrikeSwordShield | ![](doc/img/task_gifs/HumanoidASEStrikeSwordShieldRofuncRLASE.gif) | | ✅ |
| BiShadowHandBlockStack | ![](doc/img/task_gifs/BiShadowHandBlockStackRofuncRLPPO.gif) | | ✅ |
| BiShadowHandBottleCap | ![](doc/img/task_gifs/BiShadowHandBottleCapRofuncRLPPO.gif) | | ✅ |
| BiShadowHandCatchAbreast | ![](doc/img/task_gifs/BiShadowHandCatchAbreastRofuncRLPPO.gif) | | ✅ |
| BiShadowHandCatchOver2Underarm | ![](doc/img/task_gifs/BiShadowHandCatchOver2UnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHandCatchUnderarm | ![](doc/img/task_gifs/BiShadowHandCatchUnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHandDoorOpenInward | ![](doc/img/task_gifs/BiShadowHandDoorOpenInwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHandDoorOpenOutward | ![](doc/img/task_gifs/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHandDoorCloseInward | ![](doc/img/task_gifs/BiShadowHandDoorCloseInwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHandDoorCloseOutward | ![](doc/img/task_gifs/BiShadowHandDoorCloseOutwardRofuncRLPPO.gif) | | ✅ |
| BiShadowHandGraspAndPlace | ![](doc/img/task_gifs/BiShadowHandGraspAndPlaceRofuncRLPPO.gif) | | ✅ |
| BiShadowHandLiftUnderarm | ![](doc/img/task_gifs/BiShadowHandLiftUnderarmRofuncRLPPO.gif) | | ✅ |
| BiShadowHandHandOver | ![](doc/img/task_gifs/BiShadowHandOverRofuncRLPPO.gif) | | ✅ |
| BiShadowHandPen | ![](doc/img/task_gifs/BiShadowHandPenRofuncRLPPO.gif) | | ✅ |
| BiShadowHandPointCloud | | | |
| BiShadowHandPushBlock | ![](doc/img/task_gifs/BiShadowHandPushBlockRofuncRLPPO.gif) | | ✅ |
| BiShadowHandReOrientation | ![](doc/img/task_gifs/BiShadowHandReOrientationRofuncRLPPO.gif) | | ✅ |
| BiShadowHandScissors | ![](doc/img/task_gifs/BiShadowHandScissorsRofuncRLPPO.gif) | | ✅ |
| BiShadowHandSwingCup | ![](doc/img/task_gifs/BiShadowHandSwingCupRofuncRLPPO.gif) | | ✅ |
| BiShadowHandSwitch | ![](doc/img/task_gifs/BiShadowHandSwitchRofuncRLPPO.gif) | | ✅ |
| BiShadowHandTwoCatchUnderarm | ![](doc/img/task_gifs/BiShadowHandTwoCatchUnderarmRofuncRLPPO.gif) | | ✅ |
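All of the tasks above are trained through the same kind of agent-environment interaction. Purely to illustrate the Gym-style rollout loop that an RL sub-package like RofuncRL standardizes (this is a self-contained toy example, not RofuncRL's API; `ToyEnv`, `rollout`, and `greedy` are illustrative names), here is a minimal sketch:

```python
class ToyEnv:
    """Minimal Gym-style environment: move a point toward the origin on a line."""
    def reset(self):
        self.pos = 5.0
        return self.pos

    def step(self, action):
        self.pos += action                 # action is a signed step
        reward = -abs(self.pos)            # closer to the origin = higher reward
        done = abs(self.pos) < 0.5         # episode ends near the origin
        return self.pos, reward, done, {}

def rollout(env, policy, max_steps=100):
    """Run one episode and return the total reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

# A trivial hand-written policy: always step toward the origin
greedy = lambda obs: -1.0 if obs > 0 else 1.0
print(rollout(ToyEnv(), greedy))
```

A real trainer replaces `greedy` with a learned policy (e.g. a PPO or SAC actor network) and the toy environment with an IsaacGym task, but the reset/step/reward loop is the same contract.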

Star History

Star History Chart

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
         title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
         author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
         journal={IEEE Robotics and Automation Letters},
         volume={7},
         number={2},
         pages={5159--5166},
         year={2022},
         publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
               title={Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
               author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
               pages={4920--4925},
               year={2023},
               organization={IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@inproceedings{liu2023birp,
               title={Birp: Learning robot generalized bimanual coordination using relative parameterization method on human demonstration},
               author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Tan, Kay Chen and Chen, Fei},
               booktitle={2023 62nd IEEE Conference on Decision and Control (CDC)},
               pages={8300--8305},
               year={2023},
               organization={IEEE}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

Planning and Control

  1. Robotics codes from scratch (RCFS)