maxspahn / gym_envs_urdf

URDF environments for gym
https://maxspahn.github.io/gym_envs_urdf/
GNU General Public License v3.0

Use case multi robot carry #124

Open josyulakrishna opened 1 year ago

josyulakrishna commented 1 year ago

I have followed all the instructions and tried to run the code; however, I am getting this error:

warnings.warn(str(err))
Traceback (most recent call last):
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 51, in <module>
    run_multi_robot(render=True, obstacles=True, goal=True)
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/multi_robot.py", line 34, in run_multi_robot
    from examples.scene_objects.goal import dynamicGoal
  File "/home/josyula/Programs/MAS_Project/gym_envs_urdf/examples/scene_objects/goal.py", line 1, in <module>
    from MotionPlanningGoal.staticSubGoal import StaticSubGoal
ModuleNotFoundError: No module named 'MotionPlanningGoal'

I did a search, and it looks like there isn't a MotionPlanningGoal module in the project. How can I resolve this error? Thank you!

josyulakrishna commented 1 year ago

Anyone facing this issue: please install this package, https://pypi.org/project/motion-planning-scenes/. Thank you.

maxspahn commented 1 year ago

@josyulakrishna Thanks for your interest in the project. And great to see that you managed to solve the problem yourself as well.

The motion planning scenes are optional. You can install them using pip: pip install "urdfenvs[scenes]"

If you use pip on the cloned repository, the syntax would be pip install -e ".[scenes]"

If you use poetry, it is poetry install -E scenes

josyulakrishna commented 1 year ago

Thank you @maxspahn. I would need some help/advice. I am planning to build a multi-agent payload-carrying simulation on top of the existing multi-robot simulation with point robots, which looks something like this; the number of robots and the payload shape can change, but for now I am targeting a rectangular payload. image

This is the simulation rendered with two point robots: image

The idea is to train these robots with RL to carry the load to a goal. However, I'm not exactly sure where to start, as I'm very new to PyBullet and URDF. How do I add this load onto the robots so that it is essentially latched to them? Can you please give me some suggestions? Any help is deeply appreciated. Thank you!

maxspahn commented 1 year ago

Hi @josyulakrishna,

Nice use case. Especially as it will directly use the brand-new feature of multiple robots in the scene. You can add arbitrary urdf files to the scene by creating obstacles and specifying the location of the file. If you initially place it on the robots, they will continue 'carrying' the load. I have attached a simple example file. Both files have to be placed in the same folder, and you need MotionPlanningEnvs. Note that the physics of the item (block.urdf) are probably not well-defined, but this is not my strong suit. You can play around with the urdf file to see the physics improving.

example_carry_multi_robots.zip

There is only one thing you will have to do: define an appropriate reward function for this case, probably based on the location of the obstacle. We are currently developing a method to access the position of all obstacles in the scene at runtime, which might help you a lot with that. @c-salmi: this might be an interesting use case for the obstacle sensor.
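To give a rough idea, a dense reward could penalize the distance between the load and the goal. The sketch below is only an illustration (it is not part of urdfenvs): the goal position is a placeholder, and load_body_id stands for the pybullet body id of the block, which you would have to track yourself when adding the obstacle.

    import numpy as np
    import pybullet as p

    GOAL_POSITION = np.array([3.0, 0.0, 0.0])  # placeholder goal for the load

    def carry_reward(load_body_id: int) -> float:
        # Query the load's current pose directly from pybullet.
        load_position, _ = p.getBasePositionAndOrientation(load_body_id)
        # Dense reward: the closer the load is to the goal, the higher the reward.
        return -float(np.linalg.norm(np.array(load_position) - GOAL_POSITION))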

Let me know how it goes, and feel free to open a branch in your fork so we can directly discuss your code (and eventually merge the use case in here as an example :fire:). In that case, I recommend using the poetry virtual environment and the develop branch.

josyulakrishna commented 1 year ago

Thank you so much @maxspahn. I have cloned the repo and will update my progress there.

I am actually looking into making the load latch onto the robot. I have modified the pointRobot.urdf, which can be seen in this code, and the simulation can be run with point_robot.py in the examples folder; a picture is attached below. This also wouldn't block the lidar's view.

image

Also, I need to have the load latched to the robots, so I tried to make a new urdf file, for now with two robots, which would be carrying the load; the file is here. However, robot_2's revolute joint is not rotating around the robot's axis. I have tried asking on Slack and the ROS forums, but I had no luck. It'd be really helpful if you could take a look at it.

Thank you again for the advice!

maxspahn commented 1 year ago

Hi again,

I have looked at your fork and the corresponding implementations. There are a few things I should clarify:

1) If you want multiple robots, you should load separate urdf files instead of adding all the robots into one urdf file like in https://github.com/josyulakrishna/gym_envs_urdf/blob/master/examples/multiPointRobot.urdf. Combining them makes it more cumbersome to maintain simple urdf files and harder to debug.

2) I assume you want the load to be 'movable', i.e. it should be able to fall off the robot. However, if you attach the green plank to the robot in the urdf file, it will be statically fixed to it; it becomes part of the robot, so to say. Therefore, you should really let the plank fall into the latch of the robot (similar to the example I sent here earlier).

So, given that you have already designed a nice point robot with a latch that does not mess with the lidar, there is very little left to do. 1) Simply load two of those robots into the environment with code that looks something like

    import gym
    import numpy as np
    from urdfenvs.robots.generic_urdf import GenericUrdfReacher

    # Two identical point robots that will carry the load together.
    robots = [
        GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
        GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
    ]
    env = gym.make(
        "urdf-env-v0",
        dt=0.01, robots=robots, render=True
    )
    # One velocity action per robot, concatenated (applied later via env.step).
    action = np.array([0.1, 0.0, 1.0, 0.1, 0.0, 1.0])
    # One initial base pose per robot.
    base_pos = np.array(
        [
            [1.0, 0.1, 0.0],
            [1.5, 0.1, 0.0],
        ]
    )
    ob = env.reset(base_pos=base_pos)

2) Then, you let the green plank fall into the latch at the beginning of the simulation with code that looks something like

    import os
    from MotionPlanningEnv.urdfObstacle import UrdfObstacle

    urdf_obstacle_dict = {
        "type": "urdf",
        "geometry": {"position": [0.2, -0.0, 1.05]},
        "urdf": os.path.join(os.path.dirname(__file__), "block.urdf"),
    }
    urdf_obstacle = UrdfObstacle(name="carry_object", content_dict=urdf_obstacle_dict)
    env.add_obstacle(urdf_obstacle)

There you obviously have to play around with the position of the obstacle so that it actually falls into the robot's latch.

I hope this helps a bit.

Let me know if something remains unclear.

Best, Max.

maxspahn commented 1 year ago

By the way: I have re-opened the issue and changed its name.

josyulakrishna commented 1 year ago

Thank you @maxspahn, indeed this does give me some ideas! I think I can enlarge the latch and make it a little better for carrying loads. Will keep you posted :)

josyulakrishna commented 1 year ago

@maxspahn Hey Max! Hope you're doing well. Is there any way I can suppress these kinds of warnings? They seem to be coming from the lidar sensor link.

b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
holder_left
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
holder_top
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
lidar_sensor_link_1

Any tips on making the simulation faster for training? I have disabled rendering too.

maxspahn commented 1 year ago

Concerning the warnings: I think you could add a mass to the corresponding link in the urdf file. It never really bothered me, as it is only invoked once.

For speed: I am afraid that the physics engine is the limiting factor here. Unless you are ready to increase the time step, there is not much more speed-up to be expected.
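For instance (just a sketch, reusing the multi-robot setup from above), a larger time step directly reduces the number of physics steps per simulated second:

    import gym
    from urdfenvs.robots.generic_urdf import GenericUrdfReacher

    robots = [
        GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
        GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
    ]
    # dt=0.05 instead of dt=0.01: five times fewer physics updates per
    # simulated second, at the cost of integration accuracy and contact
    # stability.
    env = gym.make("urdf-env-v0", dt=0.05, robots=robots, render=False)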

We are currently looking into replacing the physics engine with Isaac Gym, as it allows better parallelization.

josyulakrishna commented 1 year ago

@maxspahn

Resetting the environment removes the robots, keeping only one robot in the environment.

For example, this simple code:

    import time

    import gym
    import os
    import numpy as np
    from urdfenvs.robots.generic_urdf import GenericUrdfReacher
    from MotionPlanningEnv.urdfObstacle import UrdfObstacle
    from MotionPlanningGoal.staticSubGoal import StaticSubGoal
    from urdfenvs.sensors.lidar import Lidar
    goalDict = {
        "type": "staticSubGoal",
        "desired_position": [0.2, -0.0, 1.05],
        "is_primary_goal": True,
        "indices": [0, 1, 2],
        "parent_link": 0,
        "child_link": 3,
        "epsilon": 0.2,
    }
    goal = StaticSubGoal(name="goal", content_dict=goalDict)

    def create_obstacle():
        """
        You can place any arbitrary urdf file in here.
        The physics of this box is not perfect, but this is not the field of
        my expertise.
        """

        urdf_obstacle_dict = {
            "type": "urdf",
            "geometry": {"position": [0.2, -0.0, 1.05]},
            "urdf": os.path.join(os.path.dirname(__file__), "block.urdf"),
        }
        urdf_obstacle = UrdfObstacle(name="carry_object", content_dict=urdf_obstacle_dict)
        return urdf_obstacle

    def run_multi_robot_carry(n_steps=1000, render=False):
        # Two robots to carry the load
        robots = [
            GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
            GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
        ]
        env = gym.make("urdf-env-v0", dt=0.1, robots=robots, render=render)
        # Choosing arbitrary actions
        action = [0.1, 0., 0.0, 0.1, 0., 0.0]
        base_pos = np.array(
            [
                [0.0, 1.0, 0.0],
                [0.0, -1.0, 0.0],
            ]
        )
        ob = env.reset(base_pos=base_pos)
        # Placing the obstacle
        # env.add_obstacle(create_obstacle())
        # env.add_goal(goal)
        # lidar = Lidar(4, nb_rays=4, raw_data=False)
        # env.add_sensor(lidar, robot_ids=[0, 1])
        # env.add_walls()
        env.add_stuff()
        history = []
        i = 0
        for _ in range(n_steps):
            i += 1
            print("i ,", i)
            if i % 20 == 0:
                env.reset(base_pos=base_pos)
                env.add_stuff()
            else:
                # WARNING: The reward function is not defined for you case.
                # You will have to do this yourself.
                ob, reward, done, info = env.step(action)
                print("i", i, done)
            # if done:
            #     break
            history.append(ob)
        env.close()

        return history

    if __name__ == "__main__":
        run_multi_robot_carry(render=True)

Before reset:

image

After the first reset, only one robot remains in the environment:

image

I'm not sure why this is happening. I noticed this after running PPO, where it seemed to fail consistently; env.reset() is called after every episode terminates, which leaves only one robot in the environment. Can you please help with this?

All the code I'm using is here https://github.com/josyulakrishna/gym_envs_urdf

Edit: Found a workaround: every time env.reset(base_pos) is called, one can rebuild the environment.
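A minimal sketch of the workaround (same setup as the snippet above): tear the environment down and build a fresh one whenever an episode ends, instead of calling env.reset():

    import gym
    import numpy as np
    from urdfenvs.robots.generic_urdf import GenericUrdfReacher

    def rebuild_env(render=False):
        # Build a fresh environment so that both robots are present again.
        robots = [
            GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
            GenericUrdfReacher(urdf="loadPointRobot.urdf", mode="vel"),
        ]
        env = gym.make("urdf-env-v0", dt=0.1, robots=robots, render=render)
        base_pos = np.array([[0.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
        ob = env.reset(base_pos=base_pos)  # the very first reset behaves correctly
        return env, ob

    # On episode termination: close and rebuild instead of env.reset().
    env, ob = rebuild_env(render=True)
    env.close()
    env, ob = rebuild_env(render=True)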

josyulakrishna commented 1 year ago

@maxspahn Hey Max, can you please give me an environment with the walls set up like this?

https://raw.githubusercontent.com/maxspahn/gym_envs_urdf/master/docs/source/img/pointRobotKeyboardInput.gif

Thank you.
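A minimal starting point, assuming env.add_walls() (the call commented out in the snippet above) encloses the scene with its default walls:

    import gym
    from urdfenvs.robots.generic_urdf import GenericUrdfReacher

    robots = [GenericUrdfReacher(urdf="pointRobot.urdf", mode="vel")]
    env = gym.make("urdf-env-v0", dt=0.01, robots=robots, render=True)
    ob = env.reset()
    # Default walls; the exact layout in the gif may need custom dimensions.
    env.add_walls()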

josyulakrishna commented 1 year ago

Anyone interested may like this result: I used MAPPO to get this behavior, and I will extend it to obstacle avoidance (passing through gaps in walls).

https://user-images.githubusercontent.com/2005601/202611517-44f4b8d1-b420-4a6a-a01b-4248bdfc3255.mp4

Capsar commented 1 year ago

The solution of installing the extra package is also a fix for the following:

File "C:\Users\xxx\miniconda3\envs\xxx\lib\site-packages\urdfenvs\urdf_common\urdf_env.py", line 8, in <module> from MotionPlanningEnv.collisionObstacle import CollisionObstacle ModuleNotFoundError: No module named 'MotionPlanningEnv'