-
Hello there, Lorenzo!
Your code was very well written and well documented in the report.
I liked how you used different agents trained against each other. I think the reason behind `rl_base` outperfo…
-
Hello, when I tried to run your code, I encountered the following error. I believe this is an issue with the version of rl-agents. If possible, could you share your version number?
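In case it helps, here is how I'm checking my own installed version (assuming the package is pip-installed under the distribution name `rl-agents`; adjust the name if your checkout uses another one):

```python
# Print the installed distribution version; assumes a pip install
# under the name "rl-agents".
from importlib.metadata import version

print(version("rl-agents"))
```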
-
Hey,
In the last step of the readme, there's a mention of an `analyze.py` file, which is missing from the rl-agents repo. Could you please share a link to it so I can compare the results of the three algo…
-
I propose we write a user guide for rlberry. Its outline would be something like this (a rough Quick Start sketch follows below the outline):
* Installation
* Basic Usage
* Quick Start RL
* Quick Start Deep RL
* Setup of an experiment
…
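As a rough illustration of what the Quick Start RL section could open with, here is a sketch only; the exact rlberry API may differ between versions, so this shows the plain Gym loop the guide would build on before introducing rlberry's agent and experiment utilities:

```python
import gymnasium as gym

# The minimal interaction loop a Quick Start would start from.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()  # placeholder for a trained agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```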
-
### What happened + What you expected to happen
Using the new V2 API stack raises a `NotImplementedError` in `BatchIndividualItems(ConnectorV2)`
### Versions / Dependencies
Google Colab, Pyth…
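For reference, a minimal sketch of the kind of script that exercises the new API stack and reaches the ConnectorV2 path (the env and config values here are assumptions, not the original repro):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Enable the new RLModule/Learner + EnvRunner/ConnectorV2 ("V2") stack.
config = (
    PPOConfig()
    .api_stack(
        enable_rl_module_and_learner=True,
        enable_env_runner_and_connector_v2=True,
    )
    .environment("CartPole-v1")
)
algo = config.build()
algo.train()  # per the report, the error surfaces once training runs under this stack
```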
-
I am trying to train a PPO agent using this [repository](https://github.com/HumanCompatibleAI/human_aware_rl), which is a repo of DRL implementations compatible with a multi-agent environment (Ov…
-
Thank you yosider for sharing this very interesting repo!
Would you consider implementing simpler agents "RL-LSTM" and/or "RL-Mem" from the paper? They would make good baselines and are simpler to…
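In case it's useful, here is a rough sketch of the recurrent core such an RL-LSTM baseline might use (PyTorch; the sizes and layer choices are my assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """Minimal recurrent policy: observation -> LSTM -> action logits."""

    def __init__(self, obs_dim: int, n_actions: int, hidden_size: int = 128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim); state: optional (h, c) carried between calls.
        x = torch.relu(self.encoder(obs_seq))
        out, state = self.lstm(x, state)
        return self.head(out), state
```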
-
Hi,
I have been running reinforcement learning and multi-agent RL experiments for one of my projects by implementing a custom env. One of the crucial requirements for my project is that agents have distinct ac…
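For context, here is the kind of setup I mean, assuming the requirement above is per-agent action spaces (the agent names and spaces below are hypothetical):

```python
from gymnasium import spaces

# Hypothetical per-agent spaces for a custom multi-agent env:
# each agent id maps to its own, possibly different, action space.
action_spaces = {
    "agent_0": spaces.Discrete(4),                          # e.g. grid moves
    "agent_1": spaces.Box(low=-1.0, high=1.0, shape=(2,)),  # e.g. continuous control
}
print(action_spaces["agent_0"].sample(), action_spaces["agent_1"].sample())
```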
-
Hi,
thanks for the amazing work on RL environments using JAX. I was wondering if you have any plans to write Actor-Critic agents for this work?
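If it helps gauge interest, here is a rough sketch of the kind of network an actor-critic agent on top of these envs might use (Flax; the layer sizes and two-head layout are my assumptions, not a plan from this repo):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class ActorCritic(nn.Module):
    """Shared torso with separate policy and value heads (sketch only)."""
    n_actions: int
    hidden: int = 64

    @nn.compact
    def __call__(self, obs):
        x = nn.relu(nn.Dense(self.hidden)(obs))
        x = nn.relu(nn.Dense(self.hidden)(x))
        logits = nn.Dense(self.n_actions)(x)  # policy head
        value = nn.Dense(1)(x)                # value head
        return logits, jnp.squeeze(value, axis=-1)

# Example initialization for an 8-dim observation (shapes are placeholders).
params = ActorCritic(n_actions=4).init(jax.random.PRNGKey(0), jnp.zeros((1, 8)))
```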
-
Here is an example of how to do that with active tracking using Deep RL:
https://towardsdatascience.com/cubetrack-deep-rl-for-active-tracking-with-unity-ml-agents-6b92d58acb5d