-
I can evaluate on ArchitecTHOR successfully, and I would like to evaluate on the RoboTHOR dataset by switching
evaluation.tasks=["architecthor"] to robothor, but I obtained the following error:
P…
-
- [ ] I have marked all applicable categories:
  + [ ] exception-raising bug
  + [ ] RL algorithm bug
  + [ ] documentation request (i.e. "X is missing from the documentation.")
  + [X] ne…
-
**Describe the issue**:
I am currently facing an issue with NNI hyperparameter optimization, where all trials are failing for my deep learning model implemented in TensorFlow Keras. I have attempted…
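For reference, a minimal sketch of how the trial script is structured (the model, data, and hyperparameter names here are placeholders, not the actual code; the NNI calls are the standard `get_next_parameter` / `report_final_result` pattern):

```python
# trial.py -- minimal NNI trial sketch for a Keras model.
# Model, data, and hyperparameters are placeholders; the NNI calls are
# the standard API (nni.get_next_parameter / nni.report_final_result).
import nni
import numpy as np
from tensorflow import keras

params = {"hidden_units": 64, "lr": 1e-3}   # defaults used without NNI
params.update(nni.get_next_parameter())     # values chosen by the tuner

x = np.random.rand(256, 10).astype("float32")                 # placeholder data
y = np.random.randint(0, 2, size=(256,)).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(params["hidden_units"], activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(params["lr"]),
              loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(x, y, epochs=3, verbose=0)

# A trial that never reports a final result (or raises before this line)
# is marked as failed by NNI.
nni.report_final_result(float(history.history["accuracy"][-1]))
```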
-
**Describe the bug**
In the scripts rl_baselines/rl_algorithm/sac.py, deepq.py, and ddpg.py, customArguments forgets to call the parent implementation:
def customArguments(self, parser):
    ## missing: super().customArguments(parser)
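A sketch of the expected fix (the class names and the extra argument below are illustrative, not the repository's exact code): each algorithm's customArguments should first delegate to the base class so the shared arguments are registered, then add its own.

```python
# Sketch of the fix (illustrative names): delegate to the base class so
# its shared CLI arguments are registered, then add algorithm-specific ones.
import argparse

class BaseRLObject:
    """Stand-in for the shared base class (illustrative)."""
    def customArguments(self, parser):
        parser.add_argument('--seed', type=int, default=0)   # shared argument
        return parser

class SAC(BaseRLObject):
    def customArguments(self, parser):
        super().customArguments(parser)   # the call that is currently missing
        parser.add_argument('--buffer-size', type=int, default=50000)
        return parser

parser = SAC().customArguments(argparse.ArgumentParser())
print(parser.parse_args([]))   # Namespace(seed=0, buffer_size=50000)
```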
**C…
-
As I've been taking lots of notes while reading papers related to Rainbow, I thought I'd set up the documentation website and flesh it out gradually. I'll link a pull request with a first version of t…
-
### What happened + What you expected to happen
I tried to run a demo example of an attention net with the PPO algorithm on RepeatAfterMeEnv, and it raises an error on the first training iteration.…
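For reference, roughly the setup involved (a minimal sketch, not the exact demo script: the import path of RepeatAfterMeEnv differs across Ray versions, so a built-in env stands in here, and use_attention is the model flag that enables the attention net):

```python
# Minimal sketch of the setup: PPO with the attention model enabled via
# the "use_attention" model flag. "CartPole-v1" stands in for
# RepeatAfterMeEnv, whose import path varies between Ray versions.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .training(model={"use_attention": True})
)
algo = config.build()
result = algo.train()   # the error shows up during this first iteration
```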
-
Dear author, I am implementing a multi-agent setting using highway-v0. I am not able to achieve stable training, and the vehicles can run off the road without the environment terminating. I too…
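For reference, roughly how the environment is configured (a sketch with illustrative values; the multi-agent and offroad_terminal keys are the highway-env config options as I understand them, and the exact configure/registration API depends on the installed version):

```python
# Sketch of the multi-agent setup (values illustrative). The point of
# interest is "offroad_terminal": without it, leaving the road does not
# end the episode, which matches the behaviour described above.
import gymnasium as gym
import highway_env  # noqa: F401  (registers the highway environments)

env = gym.make("highway-v0")
env.unwrapped.configure({
    "controlled_vehicles": 2,
    "action": {
        "type": "MultiAgentAction",
        "action_config": {"type": "DiscreteMetaAction"},
    },
    "observation": {
        "type": "MultiAgentObservation",
        "observation_config": {"type": "Kinematics"},
    },
    "offroad_terminal": True,   # terminate when the ego vehicle leaves the road
})
obs, info = env.reset()
```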
-
Dear CORL Team,
Firstly, I would like to express my appreciation for your work on the CORL codebase. The clean, single-file implementation coupled with a robust performance report has greatly impre…
-
### What happened + What you expected to happen
The new API stack for RLlib seems to have challenges with observation wrappers, which are quite handy for action masking models. Unlike #44452, it is n…
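For context, the kind of observation wrapper meant here (a minimal sketch of the usual action-masking pattern; the mask logic is a placeholder and the wrapper is plain Gymnasium, not tied to a particular RLlib version):

```python
# Sketch of an observation wrapper that adds an action mask: the pattern
# that is awkward to use with the new API stack. The mask logic is a
# placeholder (all actions valid).
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ActionMaskWrapper(gym.ObservationWrapper):
    def __init__(self, env):
        super().__init__(env)
        self.observation_space = spaces.Dict({
            "action_mask": spaces.Box(0.0, 1.0, (env.action_space.n,), np.float32),
            "observations": env.observation_space,
        })

    def observation(self, obs):
        # Placeholder mask: a real env would set invalid actions to 0.
        mask = np.ones(self.env.action_space.n, dtype=np.float32)
        return {"action_mask": mask, "observations": obs}


env = ActionMaskWrapper(gym.make("CartPole-v1"))
obs, info = env.reset()
```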
-
By training on hypothetical world models, it could be that we need less data from the original environment. Does our algorithm actually need fewer samples than typical RL run directly on the real environment? Us…