-
### What happened + What you expected to happen
I get the following error: `Action space MultiDiscrete([11 5 1 2]) is not supported for DQN.`
### Versions / Dependencies
ray: 2.6.3
### Reproduction script…
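DQN requires a `Discrete` action space, so a common workaround is to enumerate every combination of the `MultiDiscrete` components behind an action wrapper. A minimal sketch, assuming a Gymnasium-style env (the wrapper name is illustrative):

```python
import numpy as np
import gymnasium as gym


class FlattenMultiDiscrete(gym.ActionWrapper):
    """Expose a MultiDiscrete action space as a single Discrete space."""

    def __init__(self, env):
        super().__init__(env)
        self.nvec = env.action_space.nvec  # e.g. [11, 5, 1, 2]
        # One Discrete action per combination of the components.
        self.action_space = gym.spaces.Discrete(int(np.prod(self.nvec)))

    def action(self, act):
        # Map the flat index back to one index per original component.
        return np.array(np.unravel_index(act, self.nvec))
```

This keeps DQN's single categorical output while preserving the original action semantics; the flattened space here has 11 * 5 * 1 * 2 = 110 actions, which is still tractable.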
-
I'm trying to run the following code to test PPO with Sonic the Hedgehog, running it in parallel with SubprocVecEnv.
Unfortunately, I run into the following error:
```
Traceback (most recent call las…
```
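For reference, a minimal PPO + SubprocVecEnv setup along these lines (assuming stable-baselines3 and gym-retro; the game id and worker count are illustrative) looks like:

```python
import retro
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv


def make_env():
    # gym-retro allows only one emulator per process, which is why
    # SubprocVecEnv (one env per subprocess) is used for parallelism.
    def _init():
        return retro.make(game="SonicTheHedgehog-Genesis")
    return _init


if __name__ == "__main__":
    # SubprocVecEnv spawns worker processes, so guard with __main__.
    env = SubprocVecEnv([make_env() for _ in range(4)])
    model = PPO("CnnPolicy", env, verbose=1)
    model.learn(total_timesteps=100_000)
```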
-
I am working on implementing the L2M2019 challenge, and I have an issue where, if I try to run
```python
from osim.env import L2M2019Env
env = L2M2019Env(visualize=True)
observation = en…
```
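For context, the standard osim-rl interaction loop around that snippet (random actions here, just to exercise the environment) is roughly:

```python
from osim.env import L2M2019Env

env = L2M2019Env(visualize=True)
observation = env.reset()
for _ in range(200):
    # Random muscle activations, just to drive the simulation.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
```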
-
When I execute it in the terminal:
```
python3 train.py -e configs/empty.yaml -a dqn -c agents/DQN.py -t mlp
```
It shows the error:
```
/home/jylong/.local/lib/python3.8/site-packages/gym/utils/passive_e…
```
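If that message comes from gym's passive environment checker (the truncated path points into `gym/utils/`), it is typically a warning rather than a fatal error, and it can be disabled when the environment is constructed. A hedged sketch, assuming gym >= 0.24, where this keyword exists:

```python
import gym

# The env id is illustrative; the point is the disable_env_checker flag.
env = gym.make("CartPole-v1", disable_env_checker=True)
```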
-
For architecture search across a variety of environments, it's crucial to access the parameters of the observation and action spaces.
How do you do that with this library?
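Assuming the library exposes Gym/Gymnasium-style space objects (the issue does not name the library, so this is an assumption), the parameters can be read directly off the spaces:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")  # illustrative env id

obs_space = env.observation_space
act_space = env.action_space

print(obs_space.shape, obs_space.dtype)    # e.g. (4,) float32
if isinstance(act_space, gym.spaces.Discrete):
    print(act_space.n)                     # number of discrete actions
elif isinstance(act_space, gym.spaces.Box):
    print(act_space.low, act_space.high)   # continuous bounds
elif isinstance(act_space, gym.spaces.MultiDiscrete):
    print(act_space.nvec)                  # cardinality per component
```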
-
## Describe the New Feature ##
This task is to create a python script to read NRL innovation files and use python embedding to pass them into the ascii2nc tool. Once that script works well, collabora…
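MET's python embedding for point observations expects the script to produce a variable named `point_data` in the 11-column point format. A minimal sketch; the file layout, column names, and script name are assumptions, since the NRL innovation format is not described here:

```python
# read_nrl_innov.py -- hypothetical reader for NRL innovation files.
import sys
import pandas as pd

input_file = sys.argv[1]

# ascii2nc's python embedding looks for `point_data`: a list of lists in
# MET's 11-column point-observation format (Message_Type, Station_ID,
# Valid_Time, Lat, Lon, Elevation, Var_Name, Level, Height, QC_String,
# Observation_Value).
df = pd.read_csv(input_file)  # placeholder: the actual layout may differ
point_data = df[["msg_typ", "sid", "vld", "lat", "lon", "elv",
                 "var", "lvl", "hgt", "qc", "obs"]].values.tolist()
```

It would then be invoked along the lines of `ascii2nc -format python "read_nrl_innov.py innov.txt" out.nc` (see the MET User's Guide for the exact syntax).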
-
Hi, I would like to ask about the difference between states, observations, and amp_observations.
My understanding is that the state space is not defined for the humanoid task and just the observation spa…
-
## Bug report
**Required Info:**
- Operating System:
  - Linux (custom)
- ROS2 Version:
  - Iron sources
- Version or commit hash:
  - 1.2.9
- DDS implementation:
  - CycloneDDS
#### S…
-
Hi,
I don't have any trouble getting the env viewer to work, but when I try to actually train the models I run into issues. When I run the line:
```
python3 -m assistive_gym.learn --env "FeedingSawyer…
```
-
The current AutoCAT does not scale well. An improvement is to define the observation spaces for the policy net and the value net differently. During deployment of the model and inference, only the polic…
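That asymmetric setup (a privileged value net used only during training, a lean policy net used at inference) can be sketched as follows; the dimensions and layer sizes are illustrative, not AutoCAT's actual ones:

```python
import torch
import torch.nn as nn

POLICY_OBS_DIM = 64    # observation available at deployment
VALUE_OBS_DIM = 128    # policy obs plus privileged training-only features
NUM_ACTIONS = 4

policy_net = nn.Sequential(
    nn.Linear(POLICY_OBS_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_ACTIONS))
value_net = nn.Sequential(
    nn.Linear(VALUE_OBS_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

# Training uses both nets; deployment/inference only runs the policy,
# so the larger privileged observation never has to be built online.
policy_obs = torch.randn(1, POLICY_OBS_DIM)
action_logits = policy_net(policy_obs)
```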