AndrejOrsula / drl_grasping

Deep Reinforcement Learning for Robotic Grasping from Octrees
https://arxiv.org/pdf/2208.00818
BSD 3-Clause "New" or "Revised" License

Add instructions for examples in README #78

Closed lorepieri8 closed 3 years ago

lorepieri8 commented 3 years ago

Is there any runnable example at the moment? If so, it would be great to get instructions for how to run it. So far, I haven't managed to run anything similar to the animated GIF in the README.

For instance, running bash ex_enjoy.bash I got:

ex_enjoy.bash: line 75: /root/drl_grasping/repos/drl_grasping/src/drl_grasping/examples/enjoy.py: No such file or directory

I'm using the Docker build.

AndrejOrsula commented 3 years ago

See README.md instead; the information here might be outdated.

--

Hello,

This repository is currently structured as a ROS 2 project, so executables (including examples) can be run with ros2 run drl_grasping <executable_name>, e.g. ros2 run drl_grasping ex_enjoy.bash. Here is a list of all current executables (from terminal auto-completion):

andrej@P5550:~$ ros2 run drl_grasping 
dataset_download_test.bash     dataset_unset_test.bash        ex_optimize.bash               process_collection.py          
dataset_download_train.bash    dataset_unset_train.bash       ex_preload_replay_buffer.bash  test_env.py                    
dataset_set_test.bash          enjoy.py                       ex_train.bash                  test_octree_conv.py            
dataset_set_train.bash         ex_enjoy.bash                  preload_replay_buffer.py       train.py

That said, bash ex_enjoy.bash should also work. I will try to fix it soon.


This project is still a WIP, so unfortunately you cannot use ros2 run drl_grasping ex_enjoy.bash or ros2 run drl_grasping enjoy.py directly, as they require an already trained agent that can be loaded and "enjoyed". I am planning to add some pre-trained agents to the repository in 2-3 weeks (https://github.com/AndrejOrsula/drl_grasping/issues/70), which can then be used directly.

For now, you need to train your own agent from scratch (with ex_train.bash or train.py). I should warn you that this can take a lot of time (days) to get decent results for the full grasp environment (I am still working on hyperparameter tuning).
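Since this takes days, it helps to monitor training progress as it runs. A minimal sketch, assuming train.py keeps the --tensorboard-log option of the rl-baselines3-zoo script it is based on, and that training was pointed at a tensorboard_logs directory:

# Hypothetical: inspect live training metrics, assuming training was started
# with the zoo-style option --tensorboard-log tensorboard_logs
tensorboard --logdir tensorboard_logs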

  1. First, I suggest downloading a dataset of models.

    ros2 run drl_grasping dataset_download_train.bash

    Alternatively, you can train on geometric primitives by setting the randomizer's object_random_use_mesh_models hyperparameter to False.
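    If you download the dataset instead, the companion scripts from the executable listing above can then activate it (a minimal sketch, assuming the dataset_set_*/dataset_unset_* scripts select which dataset split the environment uses):

    # Assumption: makes the downloaded training split available to the environment
    ros2 run drl_grasping dataset_set_train.bash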

  2. (Optional) Modify the hyperparameters/environment config. The latest hyperparameters I tried for the Grasp task are on the hyperparameter_tuning branch (approaching 50% success after ~300k timesteps); a checkout sketch follows.
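    A minimal sketch for switching to that branch (<path_to_repo> is a placeholder for wherever you cloned drl_grasping):

    # Check out the branch with the latest Grasp hyperparameters; <path_to_repo> is hypothetical
    git -C <path_to_repo> checkout hyperparameter_tuning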

  3. Train. I recommend using ex_train.bash and modifying the script itself if you want (environment, algorithm, seed, ...).

    ros2 run drl_grasping ex_train.bash

    To see what is going on, I recommend running rviz2 and then loading the config from this repo, as sketched below. You can also try ign gazebo -g, but that might not function properly.
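    A minimal visualization sketch (<path_to_rviz_config> is a placeholder for the config shipped in this repo):

    # Load the RViz config from the repository; the exact path is not fixed here
    rviz2 -d <path_to_rviz_config>
    # Optionally attach the Ignition Gazebo GUI client (might not function properly)
    ign gazebo -g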

Alternatively, you can take a closer look at what ex_train.bash does and run the individual steps yourself with all the necessary arguments from the CLI.

# Setup MoveIt2 and Ignition <-> ROS 2 bridges
ros2 launch drl_grasping ign_moveit2.launch.py
# You can also use headless mode (without rviz2)
# ros2 launch drl_grasping ign_moveit2_headless.launch.py
# Run the training script itself (which is based on https://github.com/DLR-RM/rl-baselines3-zoo)
ros2 run drl_grasping train.py --env "Grasp-OctreeWithColor-Gazebo-v0" --algo "tqc" --arg_n ...
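For illustration, the upstream rl-baselines3-zoo script accepts flags such as --seed and --n-timesteps; whether ex_train.bash forwards all of them is an assumption, so verify against the script itself:

# Hypothetical fuller invocation with zoo-style flags (verify against ex_train.bash)
ros2 run drl_grasping train.py --env "Grasp-OctreeWithColor-Gazebo-v0" --algo "tqc" --seed 42 --n-timesteps 1000000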
  4. Now you can enjoy the trained agent with ex_enjoy.bash.
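    As the error message above shows, ex_enjoy.bash invokes enjoy.py under the hood. A minimal sketch of a direct call, assuming enjoy.py keeps the CLI of the rl-baselines3-zoo scripts (exact flags may differ):

    # Hypothetical direct invocation; -f points at the folder containing the trained agent
    ros2 run drl_grasping enjoy.py --env "Grasp-OctreeWithColor-Gazebo-v0" --algo "tqc" -f logs/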

I apologise for the sparse instructions; I am currently a bit busy with other things. The first release will be at the beginning of June, so the documentation will hopefully improve by then. After that release, I will probably refactor the project and clean up all research/thesis-related code so that it is easier for other people to use.

AndrejOrsula commented 3 years ago

@lorepieri8 I fixed the issue you were having some time ago. I have also added pre-trained agents and updated the documentation with a better description of the examples. Hope that helps.