maxspahn / gym_envs_urdf

URDF environments for gym
https://maxspahn.github.io/gym_envs_urdf/
GNU General Public License v3.0
46 stars 14 forks

Ft gymnasium migration #196

Closed maxspahn closed 1 year ago

maxspahn commented 1 year ago

Migrating to gymnasium, addresses #192. Have a look, @behradkhadem. It might also solve the issues mentioned in #190.

I had a nice morning in the park and realized the migration is rather straightforward.

behradkhadem commented 1 year ago

Nice job! I was working on this issue myself (here) and ran into the same error your version has. While running `env_checker.py` I get this error:

Traceback (most recent call last):
  File "/home/behradx/anaconda3/envs/SB3/lib/python3.9/site-packages/stable_baselines3/common/env_checker.py", line 402, in check_env
    env.reset(seed=0)
  File "/home/behradx/anaconda3/envs/SB3/lib/python3.9/site-packages/gymnasium/core.py", line 462, in reset
    obs, info = self.env.reset(seed=seed, options=options)
TypeError: reset() got an unexpected keyword argument 'options'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/behradx/projects/gym_envs_urdf/examples/reinforcement learning/env_checker.py", line 76, in <module>
    check_env(env, warn=True)
  File "/home/behradx/anaconda3/envs/SB3/lib/python3.9/site-packages/stable_baselines3/common/env_checker.py", line 404, in check_env
    raise TypeError("The reset() method must accept a `seed` parameter") from e
TypeError: The reset() method must accept a `seed` parameter

In my version, I added a `seed` parameter to the `reset` method but still got the same error, and I couldn't pinpoint the source of the issue. I also had a problem with nested `Dict` observation spaces; your implementation doesn't seem to have that problem.

PS: The Gymnasium-compatible Stable-Baselines3 can be installed via `pip install "stable_baselines3[extra]>=2.0.0a9"`.
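The `options` error above can be reproduced without any of the URDF machinery: Gymnasium's base `Env.reset` (and every wrapper around it) forwards both `seed` and `options`, so a subclass whose `reset` accepts only `seed` raises exactly this `TypeError`. A minimal, dependency-free sketch of the contract (the class and function names here are illustrative, not from Gymnasium or this repo):

```python
# Minimal stand-in for the Gymnasium reset contract: wrappers and
# SB3's check_env call env.reset(seed=..., options=...), so a
# subclass's reset must accept both keyword arguments.

class BrokenEnv:
    def reset(self, seed=None):  # missing `options` -> TypeError
        return {"obs": 0}, {}

class FixedEnv:
    def reset(self, seed=None, options=None):
        # a real env would call super().reset(seed=seed, options=options)
        return {"obs": 0}, {}

def checker_reset(env):
    """Mimics what gymnasium wrappers / SB3's env checker do."""
    return env.reset(seed=0, options=None)

try:
    checker_reset(BrokenEnv())
except TypeError as exc:
    print("broken:", exc)  # unexpected keyword argument 'options'

obs, info = checker_reset(FixedEnv())
print("fixed:", obs)
```

This is why adding only `seed` was not enough: both keywords have to be accepted and forwarded.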

behradkhadem commented 1 year ago

Solved it! I don't have permission to push code upstream, so I'll write the solution here. Just replace the `reset` method with this:

    def reset(
        self,
        seed: int = None,
        options: dict = None,
        pos: np.ndarray = None,
        vel: np.ndarray = None,
        mount_positions: np.ndarray = None,
        mount_orientations: np.ndarray = None,
    ) -> tuple:
        """Resets the simulation and the robot.

        Parameters
        ----------

        seed: int:
            Seed for the random number generator, forwarded to
            gymnasium.Env.reset
        options: dict:
            Additional reset options, forwarded to gymnasium.Env.reset
        pos: np.ndarray:
            Initial joint positions of the robots
        vel: np.ndarray:
            Initial joint velocities of the robots
        mount_positions: np.ndarray:
            Mounting positions for the robots
            This is ignored for mobile robots
        mount_orientations: np.ndarray:
            Mounting orientations for the robots
            This is ignored for mobile robots
        """
        super().reset(seed=seed, options=options)
        self._t = 0.0
        if mount_positions is None:
            mount_positions = np.tile(np.zeros(3), (len(self._robots), 1))
        self.mount_positions = mount_positions
        if mount_orientations is None:
            mount_orientations = np.tile(
                np.array([0.0, 0.0, 0.0, 1.0]), (len(self._robots), 1)
            )
        if pos is None:
            pos = np.tile(None, len(self._robots))
        if vel is None:
            vel = np.tile(None, len(self._robots))
        if len(pos.shape) == 1 and len(self._robots) == 1:
            pos = np.tile(pos, (1, 1))
        if len(vel.shape) == 1 and len(self._robots) == 1:
            vel = np.tile(vel, (1, 1))
        for i, robot in enumerate(self._robots):
            checked_position, checked_velocity = robot.check_state(pos[i], vel[i])
            robot.reset(
                pos=checked_position,
                vel=checked_velocity,
                mount_position=mount_positions[i],
                mount_orientation=mount_orientations[i],
            )
        self.reset_obstacles()
        self.reset_goals()
        return self._get_ob(), self._info

This should solve it. I think my mistake was changing all the instances of `dict`; I overcomplicated the code by doing so. Good job again!
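One subtle bit in the `reset` method above is how the `pos`/`vel` defaults are normalized: `np.tile(None, n)` yields an object array with one `None` per robot, and a flat state for a single robot is tiled to a 2-D array so that `pos[i]` always selects one robot's state. A quick standalone check of that NumPy behavior (the values are illustrative, not from the repo):

```python
import numpy as np

n_robots = 1

# Default: one None entry per robot (object dtype), so pos[i] is None
# and each robot falls back to its own default state in check_state.
pos = np.tile(None, n_robots)
print(pos.shape, pos[0])  # (1,) None

# A single robot's flat joint-position vector gets a leading axis,
# so the per-robot loop can index pos[i] uniformly.
pos = np.array([0.0, 0.5, -0.5])
if len(pos.shape) == 1 and n_robots == 1:
    pos = np.tile(pos, (1, 1))
print(pos.shape)  # (1, 3)
```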

maxspahn commented 1 year ago

@behradkhadem , great that you are so responsive!!! It helps a lot! I'll integrate it today. Actually, I should simply make you a contributor so that you can change it yourself!

behradkhadem commented 1 year ago

> @behradkhadem , great that you are so responsive!!! It helps a lot! I'll integrate it today. Actually, I should simply make you a contributor so that you can change it yourself!

I'd be happy to help. I tried pushing code but I didn't have the access.

ERROR: Permission to maxspahn/gym_envs_urdf.git denied to behradkhadem.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

maxspahn commented 1 year ago

I sent you the invite to become a contributor. Please continue discussing merge requests and don't abuse the power ;)

behradkhadem commented 1 year ago

> I sent you the invite to become a contributor. Please continue discussing merge requests and don't abuse the power ;)

I thought you meant I'd become a contributor on this branch. I really didn't expect that! 😲🤯 Thanks a lot! I think this made my day!

PS: I pushed.

behradkhadem commented 1 year ago

Dear @maxspahn, everything seems fine to me. I think we can now use this package for RL purposes. I'll try a simple pick-and-place with the Panda robot. Just a few questions:

1. We can import and use our own URDF files, can't we? I'm asking because I'm planning to design a robot for my master's project (in SolidWorks) and then convert it to a URDF file.
2. Do we have a camera sensor? I remember seeing something related to it inside the package, but I fail to find it now.
3. Is there anything else I could help develop? I suggest adding a few .md files for future steps and contribution conditions.

Many thanks again!

maxspahn commented 1 year ago

> We can import and use our own URDF files, can't we? I'm asking because I'm planning to design a robot for my master's project (in SolidWorks) and then convert it to a URDF file.

Yes, you can. However, the robot you are creating must be of one of the supported types (holonomic, diff-drive or bicycle model).
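To make the supported base types concrete, here is a sketch of the forward kinematics of the differential-drive model, one of the three base types named above. This is purely illustrative; the actual urdfenvs integration code is not shown here:

```python
import math

def diff_drive_step(x, y, theta, v, omega, dt):
    """Advance a differential-drive base by one timestep.

    v     -- forward velocity along the robot's heading
    omega -- angular velocity (yaw rate)
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive straight along +x for one second at 1 m/s.
x, y, theta = diff_drive_step(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=1.0)
print(x, y, theta)  # 1.0 0.0 0.0
```

A custom URDF base therefore has to map onto one of these actuation models; an arbitrary kinematic structure for the base is not supported.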

> Do we have a camera sensor? I remember seeing something related to it inside the package, but I fail to find it now.

No, there is no camera sensor so far.

> Is there anything else I could help develop? I suggest adding a few .md files for future steps and contribution conditions.

Maybe, you could add the camera sensor :smile: . Those contributing files would also be nice to have.