-
I'm interested in creating a framework that would maximize the number of games that could be represented and allow for as much code re-use as possible. Gym already provides a few classes to help w…
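As a rough illustration of the kind of re-use I mean, a minimal sketch of a game following Gym's `reset`/`step` convention (the class, game, and reward scheme here are hypothetical, and `gym.Env` isn't imported so the sketch stays self-contained):

```python
import numpy as np

class GridGame:
    """Hypothetical 4x4 board game exposing the Gym-style reset/step API.

    A real implementation would subclass gym.Env and declare
    observation_space / action_space; this keeps only the core loop.
    """

    def __init__(self, size=4):
        self.size = size
        self._pos = None

    def reset(self):
        # agent starts in the top-left corner
        self._pos = [0, 0]
        return self._obs()

    def step(self, action):
        # 0=up, 1=down, 2=left, 3=right, clipped at the board edges
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        dr, dc = moves[action]
        self._pos[0] = min(max(self._pos[0] + dr, 0), self.size - 1)
        self._pos[1] = min(max(self._pos[1] + dc, 0), self.size - 1)
        done = self._pos == [self.size - 1, self.size - 1]
        reward = 1.0 if done else 0.0
        return self._obs(), reward, done, {}

    def _obs(self):
        # one-hot board encoding of the agent's position
        board = np.zeros((self.size, self.size), dtype=np.float32)
        board[self._pos[0], self._pos[1]] = 1.0
        return board
```

Any agent written against `reset`/`step` would then work with every game wrapped this way, which is where the code re-use comes from.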
-
Really cool that you've been working on implementing that algorithm in Python. I've been thinking of doing this as well. As far as I can tell, you're the only one who's tried it so far, so I'm …
-
**Submitting author:** @alexanderimanicowenrivers (Alexander I. Cowen-Rivers)
**Repository:** https://github.com/for-ai/rl
**Version:** v2.0
**Editor:** @mbobra
**Reviewers:** @desilinguist, @paragkul…
-
Hello, does the lib support multi-agent environments?
Or more precisely, does it allow multiple agents to share the environment state, select their actions in parallel, and then return the combined actions to the environ…
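For what it's worth, the loop I have in mind — every agent observing the shared state, choosing independently, and the joint action going back to the environment in one call — could be sketched like this (the class and function names are hypothetical, not part of the library):

```python
import random

class MatchingEnv:
    """Toy shared-state environment: reward 1 when all agents pick the same action."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, joint_action):
        reward = 1.0 if len(set(joint_action)) == 1 else 0.0
        self.state += 1
        done = self.state >= 10
        return self.state, reward, done, {}

class RandomAgent:
    """Placeholder policy; a real agent would condition on the observation."""

    def __init__(self, n_actions=2, seed=None):
        self.rng = random.Random(seed)
        self.n_actions = n_actions

    def act(self, obs):
        return self.rng.randrange(self.n_actions)

def joint_step(env, agents, obs):
    # each agent sees the same shared state and selects independently
    # (sequential here, but the selections could run in parallel)
    actions = tuple(agent.act(obs) for agent in agents)
    # the combined action tuple is handed to the environment in one call
    return env.step(actions) + (actions,)
```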
-
Thank you for these useful examples. I am trying to implement D4PG with multiple agents that interact with each other and share the same reward, but where each agent takes its own actions. I wonder if you had an…
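To make the setup concrete, the storage side of what I mean (each agent its own action, one shared team reward broadcast to every agent's buffer) might be sketched as follows — the wrapper name and buffer layout are my own assumptions, and the D4PG learners themselves are out of scope:

```python
class SharedRewardTeam:
    """Hypothetical helper: N agents act independently, but every
    transition is stored with the identical team reward, so each
    agent's critic learns from the shared signal."""

    def __init__(self, n_agents):
        # one replay buffer per agent
        self.buffers = [[] for _ in range(n_agents)]

    def store(self, obs, actions, shared_reward, next_obs, done):
        # one transition per agent: own action, shared reward
        for i, buf in enumerate(self.buffers):
            buf.append((obs, actions[i], shared_reward, next_obs, done))
```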
-
First of all, I'm not really sure whether this is a problem on my side or a bug on your side, but I've been trying to debug this for some days now and I really don't know what to do anymore.
**Bug descri…
-
After executing `python driving_benchmark.py -c Town02 -v -n test` as per your instructions, I ran into the error below. Kindly suggest how to proceed.
Built Task Block 0_red_light
Traceback (most re…
-
This one looks interesting.
> In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the …
-
When I run the code against intraday data (1-minute intervals), the same error below appears about 75% of the time when `fit()` executes, and rewards show as 0.00:
RuntimeWarning:…
-
Can I understand the `-n` option as an extension of multi-seed running? It seems to me that multi-processing creates more uncorrelated samples for training, and runs multiple tests (possibly with d…
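My mental model of that, sketched with hypothetical names (assuming `-n` maps to the number of worker processes, each seeded independently so their samples are uncorrelated):

```python
import multiprocessing as mp
import random

def run_seed(seed, episodes=5):
    """Hypothetical single-seed run: an independent RNG per worker
    means the collected samples are uncorrelated across workers."""
    rng = random.Random(seed)
    # stand-in for training/evaluation under this seed
    return seed, sum(rng.random() for _ in range(episodes))

def run_parallel(n):
    # n parallel workers, one seed each, as an '-n'-style flag might do
    with mp.Pool(n) as pool:
        return dict(pool.map(run_seed, range(n)))
```

So under this reading, `-n 4` would behave like four single-seed runs launched at once, rather than one run with a bigger batch.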