-
Hi @alex-petrenko,
Thank you for such great code!
I'm working on some RL algorithms (intrinsic reward-based ones for the MiniGrid environment) based on the version [here](https://github.com/ka…
-
Dear Jakob
I've successfully run the project on IIC-OSCI-TOOLS, and the RL_placement step is done; I have the DiffAmp_placement .pkl file. However, when I try to open this .pkl file in the Magic tools, as you…
-
### What happened + What you expected to happen
Setting `use_kl_loss=False` in PPO with new RL Module & Learner API fails due to an impossible-to-satisfy `assert` statement. Since line 500 in `ray.rl…
-
Hi everyone,
As a professional who has worked with a few RL frameworks in the past, I can confidently say that this is one of the cleanest, most user-friendly, and most advanced RL libraries I've encount…
-
On the installation page, it says I need to create a Python 3.6 environment using `conda create -n spinningup python=3.6`, but I couldn't find a way to install Python 3.6 with conda; both defaults and c…
-
I am interested in using Flow for VANETs (Vehicular Ad hoc NETworks) routing protocols, which play a key role in the design and development of Intelligent Transportation Systems. Besides RL, genetic…
-
### What happened + What you expected to happen
I tried to run a demo example of an attention net using the PPO algorithm with RepeatAfterMeEnv, and it raises an error on the first iteration of execution.…
AvisP updated 11 months ago
-
My understanding is that most RL algorithms will focus on supporting gymnasium going forward, and that will be the standard. Trying to get Ray RLlib or other RL libraries working with gym environments is prett…
-
Here we will track our progress on [Minari](https://github.com/Farama-Foundation/Minari) integration with CORL. Minari is a standard format for offline RL datasets, with popular reference datasets a…