-
Hello. I am just curious why the observation space of Maze Ant has shape 30, while the original Ant has 111.
Is something excluded? If so, how can I use the full observation space?
Thanks in advan…
-
**Important Note: We do not do technical support, nor consulting**, and we don't answer questions by personal email.
Please post your question on [reddit](https://www.reddit.com/r/reinforcementlearning/…
-
It would be great if the natural language messages from the chat box could be added into the observation / action space of the agents.
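One possible way to fit free-form chat text into a standard numeric observation space is to encode each message as a fixed-length array of byte IDs with zero padding. This is only a sketch of that idea; `MAX_LEN` and `encode_message` are hypothetical names, not part of any existing API:

```python
import numpy as np

MAX_LEN = 32  # hypothetical cap on message length

def encode_message(msg: str) -> np.ndarray:
    """Encode a chat message as a fixed-length array of byte IDs (0 = padding)."""
    ids = np.zeros(MAX_LEN, dtype=np.int64)
    raw = msg.encode("utf-8")[:MAX_LEN]  # truncate over-long messages
    ids[: len(raw)] = list(raw)
    return ids

obs = encode_message("hello")
print(obs.shape)  # (32,)
```

An array like this could then be declared as a `MultiDiscrete`- or `Box`-style component of the observation, keeping the space fixed-size regardless of message length.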
-
I think that a table with a basic problem description would greatly improve the package.
Something like the following:
### MDPs
problem | space | action
----------|----------|----------
GridW…
-
Is OffPolicyAlgorithm supported now, and is a dict observation space supported as well?
-
I'm having an issue initializing an AsyncVectorEnv for a custom environment that uses pybullet. I should note that I am using `gym==0.15.3`, as this is the only version compatible with the environment I…
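For context, `AsyncVectorEnv` takes a list of zero-argument callables ("thunks"), each of which builds a fresh environment in its own worker process. The sketch below illustrates that pattern only; `DummyEnv` is a stand-in for the pybullet environment, not real code from it:

```python
# Sketch of the env-factory ("thunk") pattern that gym.vector.AsyncVectorEnv
# expects: a list of zero-argument callables, each building a fresh env.
# DummyEnv is a hypothetical stand-in for the actual pybullet environment.
class DummyEnv:
    def __init__(self, seed):
        self.seed = seed

def make_env(seed):
    def _thunk():
        return DummyEnv(seed)  # each worker builds its own env instance
    return _thunk

env_fns = [make_env(s) for s in range(4)]
envs = [fn() for fn in env_fns]   # AsyncVectorEnv would call these itself
print([e.seed for e in envs])     # [0, 1, 2, 3]
```

With pybullet specifically, each thunk should create its own physics client inside the callable, since a connection cannot be shared across worker processes; note also that under the "spawn" start method the factories must be picklable, which can rule out lambdas defined inline.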
-
### What happened + What you expected to happen
When a discrete action space is used with a non-zero start value, the actions generated by the Ray policy algorithm do not respect it, and as a res…
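The mismatch can be illustrated without Ray: many policy implementations emit 0-based action indices, while `Discrete(n, start=start)` expects values in `[start, start + n)`. The sketch below shows one possible workaround, shifting the raw indices by the space's start; the numbers are illustrative, not taken from the report:

```python
import numpy as np

# Hypothetical sketch: a Discrete(4, start=2) space has valid actions
# {2, 3, 4, 5}, but a policy emitting 0-based indices produces 0..3.
n, start = 4, 2
rng = np.random.default_rng(0)
raw = rng.integers(0, n, size=8)   # 0-based indices, as a policy might emit
actions = raw + start              # shift into the space's valid range
assert all(start <= a < start + n for a in actions)
```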
-
Hi, I am trying to define an observation space in my custom env.
I have an agent flying in 3-D space with several obstacles, and the agent cannot observe all obstacles i…
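One common pattern for a variable number of visible obstacles is to cap the count, pad unused slots with zeros, and add a validity mask so the policy can tell real obstacles from padding. This is a sketch of that idea under assumed names (`MAX_OBSTACLES`, `build_observation` are hypothetical):

```python
import numpy as np

MAX_OBSTACLES = 5  # hypothetical cap on simultaneously observable obstacles

def build_observation(agent_pos, obstacle_positions):
    """Pad/truncate a variable-length obstacle list into a fixed-size vector."""
    slots = np.zeros((MAX_OBSTACLES, 3), dtype=np.float32)
    mask = np.zeros(MAX_OBSTACLES, dtype=np.float32)  # 1 = slot holds a real obstacle
    visible = np.asarray(obstacle_positions, dtype=np.float32)[:MAX_OBSTACLES]
    slots[: len(visible)] = visible
    mask[: len(visible)] = 1.0
    return np.concatenate([np.asarray(agent_pos, np.float32), slots.ravel(), mask])

obs = build_observation([0.0, 0.0, 1.0], [[1, 2, 3], [4, 5, 6]])
print(obs.shape)  # (23,) -> 3 agent coords + 5*3 obstacle slots + 5 mask flags
```

The resulting vector has a fixed shape, so it can be declared as an ordinary `Box`-style space even though the number of observed obstacles varies per step.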
-
Gym's rules regarding spaces have been tightened.
For example:
gym==0.24 requires observations to be ONLY `np.array`,
and since gym==0.20 the `observation_space` attribute is protected (cannot be set la…
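To illustrate why the stricter checks matter, here is a simplified stand-in (not Gym's actual code) for a strict `Box`-style containment check: an observation that is a plain list or tuple is rejected even when its values are in range, because the check insists on an `np.ndarray` of the declared shape:

```python
import numpy as np

# Simplified stand-in for a strict Box-style check, sketching the behavior
# described above; this is NOT Gym's real implementation.
low, high = np.full(3, -1.0), np.full(3, 1.0)

def contains(x):
    return (
        isinstance(x, np.ndarray)          # plain lists/tuples are rejected
        and x.shape == (3,)                # shape must match exactly
        and bool(np.all(x >= low))
        and bool(np.all(x <= high))
    )

assert contains(np.zeros(3))
assert not contains([0.0, 0.0, 0.0])  # same values, wrong type -> rejected
```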
-
Hi,
I ran this file:
`python3 carla_env_test.py`
and I got this error:
logger.warn(f"{pre} is not within the observation space.")
Traceback (most recent call last):
File "carla_env_test.p…