-
Hello, I want to replicate the results of Table 2 in your paper, in particular the performance of the Seq2Seq and BUTLER agents. After I have trained the two agents, what scripts should I run to rep…
-
Hello, I have been reading through this repository's documentation, and I understand it is possible to use an OpenAI Gym interface for training reinforcement learning agents. `keras-rl` seems like a d…
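A minimal sketch of the classic Gym-style interface (`reset`/`step`) that agent libraries such as `keras-rl` program against may help clarify the question. The `ToyCounterEnv` environment below is invented purely for illustration; it is not part of this repository or of Gym.

```python
# Sketch of the classic Gym env API (reset/step) that RL agent
# libraries such as keras-rl expect. ToyCounterEnv is a toy counter
# task invented here for illustration.
class ToyCounterEnv:
    """Reach a target count; +1 reward on success, 0 otherwise."""

    def __init__(self, target=3):
        self.target = target
        self.state = 0

    def reset(self):
        # Classic Gym API: reset() returns the initial observation.
        self.state = 0
        return self.state

    def step(self, action):
        # action: 0 = decrement, 1 = increment
        self.state += 1 if action == 1 else -1
        done = self.state == self.target
        reward = 1.0 if done else 0.0
        # Classic Gym API: (observation, reward, done, info)
        return self.state, reward, done, {}


env = ToyCounterEnv()
obs = env.reset()
done = False
total = 0.0
while not done:
    obs, reward, done, info = env.step(1)
    total += reward
```

Any agent that consumes this four-tuple contract (as `keras-rl`'s `fit` loop does) can in principle be trained against an environment shaped like this.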
-
Is this project active? (I don't see any other way to message Josiah.) I've been thinking of working on something similar but would rather contribute to an existing project than start from scratch. Bu…
-
### Question
I'm trying to implement asynchronous sampling and training with the SAC algorithm. I tried the approach shown in the code below, but I always get an error because there seems to be a …
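For reference, here is a minimal sketch of the asynchronous sampling/training pattern itself, independent of SAC: one thread fills a shared replay buffer while another draws batches, with a lock guarding concurrent access (the usual source of errors in this setup). The transitions are dummy tuples; the actual SAC update is elided.

```python
import random
import threading
from collections import deque

# One thread collects transitions into a shared replay buffer while
# another draws training batches. The lock prevents concurrent-access
# errors on the buffer. Transitions are placeholders, not real SAC data.
buffer = deque(maxlen=10_000)
lock = threading.Lock()
stop = threading.Event()

def sampler():
    step = 0
    while not stop.is_set():
        transition = (step, step + 1)  # (obs, next_obs) placeholder
        with lock:
            buffer.append(transition)
        step += 1

def trainer(batch_size=32, updates=100):
    done = 0
    while done < updates:
        with lock:
            if len(buffer) < batch_size:
                continue  # wait until enough transitions are collected
            batch = random.sample(buffer, batch_size)
        # ... compute the SAC losses on `batch` here ...
        done += 1
    stop.set()

t1 = threading.Thread(target=sampler)
t2 = threading.Thread(target=trainer)
t1.start(); t2.start()
t2.join(); t1.join()
```

The key design point is that the buffer is only ever touched while the lock is held, on both the producer and consumer sides.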
-
I've tried to make the environment work with the stable_baselines fork of baselines (https://github.com/hill-a/stable-baselines). It runs, but the result shown when I run plot_energyplus is always…
-
Hi @EndingCredits,
it's really cool that you got `NEC` working :+1:
Have you tried running your code on the Atari environments in OpenAI Gym?
I tried to train on `Pong`, but I got th…
ghost updated 7 years ago
-
### Cautions:
**Before starting the task, please refer to [Add data of ML-YouTube-Courses](https://github.com/orgs/ocademy-ai/projects/3/views/1?filterQuery=label%3Adata&pane=issue&itemId=36101499)…
-
Let's revisit Bolts and breathe some fresh air into them! As outlined in #819 and in a Slack channel, we will revisit every single feature in Bolts.
Please sign up for a feature which you'd li…
-
## Motivation
TorchRL cannot handle environments with `gymnasium.spaces.Tuple` observation spaces. I think these are fairly common outside of MuJoCo/Atari envs.
## Solution
Support for tuple …
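Until native support lands, one common workaround is to flatten the tuple observation into a single vector before it reaches the agent. The sketch below shows the idea in plain Python; `TupleObsEnv` and `FlattenTupleObs` are stand-ins invented for illustration and are not part of TorchRL or Gymnasium.

```python
# Sketch of a workaround: flatten a tuple observation into one flat
# vector before the agent sees it. Both classes are hypothetical
# stand-ins, not TorchRL or Gymnasium APIs.
class TupleObsEnv:
    def reset(self):
        # Tuple observation: (position vector, discrete flag)
        return ([0.0, 0.0], 1)

class FlattenTupleObs:
    def __init__(self, env):
        self.env = env

    @staticmethod
    def _flatten(obs):
        flat = []
        for part in obs:
            if isinstance(part, (list, tuple)):
                flat.extend(float(x) for x in part)
            else:
                flat.append(float(part))
        return flat

    def reset(self):
        return self._flatten(self.env.reset())

env = FlattenTupleObs(TupleObsEnv())
obs = env.reset()  # [0.0, 0.0, 1.0]
```

The cost of this approach is that the agent loses the structure of the observation, which is exactly why first-class `Tuple` support would be preferable.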
-
## Without the primer, the collector does not feed any hidden state to the policy
In the [RNN tutorial](https://github.com/pytorch/rl/blob/main/tutorials/sphinx-tutorials/dqn_with_rnn.py) it is st…
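To illustrate the underlying problem, here is a toy sketch of a policy that carries its own recurrent state between calls when the collector does not feed it back in. The "RNN" is just an exponential moving average invented for illustration; in the actual tutorial this role is played by TorchRL's primer transform, which seeds the hidden state in each rollout.

```python
# Toy illustration of recurrent state handling. If the collector does
# not pass the hidden state back to the policy, the policy can carry
# it internally -- but then it must be reset at episode boundaries,
# which is the failure mode the tutorial's primer prevents.
class StatefulPolicy:
    def __init__(self):
        self.hidden = 0.0  # recurrent state carried across steps

    def __call__(self, obs):
        # Toy recurrence: exponential moving average of observations.
        self.hidden = 0.9 * self.hidden + 0.1 * obs
        return 1 if self.hidden > 0 else 0

    def reset_state(self):
        # Without this reset between episodes, state leaks from one
        # episode into the next.
        self.hidden = 0.0

policy = StatefulPolicy()
actions = [policy(obs) for obs in (1.0, 1.0, -5.0)]  # [1, 1, 0]
policy.reset_state()
```

The point of the primer in the tutorial is to make this bookkeeping explicit in the rollout data rather than hiding it inside the policy object.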