-
Sorry, this might be a trivial question.
I am trying the recurrent PPO examples with my own single environment. In every case I get this size error:
```
File "ppo_lstm", line 66, in compute
rn…
```
-
I know there are better places to ask than this, but is there a recommended list of resources that walks through and explains the implementation details of an RL example from scratch? The examples provided …
-
Hi @Toni-SM ,
First of all, thank you for this excellent, well-documented library!
I might be off here, but when playing with the [`sample`](https://github.com/Toni-SM/skrl/blob/6b8b70fc2f5fd130…
-
Hi @Toni-SM, I'm new to RL,
but do you have any example code for PPO with a discrete action space?
Thanks!
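Not an authoritative skrl answer, but conceptually a discrete-action PPO policy outputs logits over the actions and samples from a categorical distribution (in skrl this role is played by a model using `CategoricalMixin`). A minimal PyTorch sketch, where the network sizes and class/parameter names are illustrative assumptions, not skrl API:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class DiscretePolicy(nn.Module):
    """Toy discrete-action policy: observation -> logits -> Categorical sample.
    Hypothetical example class, not part of skrl."""
    def __init__(self, obs_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, num_actions),  # unnormalized logits, one per discrete action
        )

    def act(self, obs: torch.Tensor):
        logits = self.net(obs)
        dist = Categorical(logits=logits)     # discrete distribution over actions
        action = dist.sample()                # integer action index, shape (batch,)
        return action, dist.log_prob(action)  # log-prob is needed for the PPO ratio

policy = DiscretePolicy(obs_dim=4, num_actions=3)
action, log_prob = policy.act(torch.zeros(1, 4))
```

The only change from the continuous case is the output head: logits plus `Categorical` instead of a mean/std Gaussian; the PPO loss itself is unchanged.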
-
Hi, I noticed that MAPPO is supported in 0.11.0, and I'm really eager to use the MAPPO algorithm in NVIDIA Isaac Sim, but this work does not seem to be available yet. Could you please tell me when I can u…
-
Hi, I am looking into using skrl + Isaac Gym as future research tools. Many thanks to the authors for providing such a high-quality library.
I am a bit confused by the implementation of IsaacGymPreview4Wra…
-
```
(orbit2) kaito@comet:~/Documents/Expt/Orbit/Project_Code$ orbit -p ppo_lift_franka.py
[INFO] Using python from: /home/kaito/mambaforge-pypy3/envs/orbit2/bin/python …
```
-
**Change sarsa_gym_taxi.py or sarsa_gymnasium_taxi.py to reproduce the problems:**
1. env = gym.make("Taxi-v3", render_mode="human") # set the render mode
2. cfg_trainer = {"timesteps": 100, "head…
-
### Describe the bug
Hello, I've found two bugs in Orbit. I'm not sure my description is fully accurate, but I hope this bug report can help you fix them in the next version.
1. Bug of body_i…
-
Hi @Toni-SM
While using the TD3 agent on the Pendulum-v1 env with the default config, I noticed that `smooth_regularization_noise` is None in the default config.
However, that config produces the e…
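For reference, the skrl documentation enables TD3's target policy smoothing by passing a noise object in the agent config rather than leaving it as None; a sketch of that pattern (the device string and the noise parameters here are assumptions, not recommended values):

```python
from skrl.agents.torch.td3 import TD3_DEFAULT_CONFIG
from skrl.resources.noises.torch import GaussianNoise

cfg = TD3_DEFAULT_CONFIG.copy()
# target policy smoothing: with the default None, no smoothing noise is applied;
# a GaussianNoise instance enables it (std 0.2 / clip 0.5 follow the TD3 paper)
cfg["smooth_regularization_noise"] = GaussianNoise(0, 0.2, device="cpu")
cfg["smooth_regularization_clip"] = 0.5
```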