-
I'm trying to implement a PPO agent to play LunarLander-v2 with the tf_agents library, as in [this tutorial](https://pylessons.com/LunarLander-v2-PPO/) ([_github repo_](https://github.com/pyt…
-
Does anyone know what is wrong with my use of tf-agents here that triggers the ValueError?
ValueError: Inputs to TanhNormalProjectionNetwork must match the sample_spec.dtype.
In call to configurable 'SacAgent'…
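This error usually means the action spec's dtype (often float64 when derived from a gym Box space) does not match the float32 samples that `TanhNormalProjectionNetwork` produces; note also that `SacAgent` needs a continuous action space, so `LunarLanderContinuous-v2` rather than the discrete `LunarLander-v2` would be required. A minimal, duck-typed sketch of the dtype fix (an illustration, not tf-agents' own wrapper API):

```python
import numpy as np

class Float32ActionEnv:
    """Hypothetical wrapper sketch: cast the env's action bounds and
    incoming actions to float32 so the action spec built from them
    matches the network's sample dtype. Assumes `env` exposes a
    Box-like `action_space` (with `low`/`high`) and a `step` method."""

    def __init__(self, env):
        self.env = env
        # Cast the bounds so any spec derived from them is float32.
        self.low = np.asarray(env.action_space.low, dtype=np.float32)
        self.high = np.asarray(env.action_space.high, dtype=np.float32)

    def step(self, action):
        # Cast incoming actions so downstream code sees float32 throughout.
        return self.env.step(np.asarray(action, dtype=np.float32))
```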
-
## Describe the bug
When training on `PettingZoo/MultiWalker-v9` with `Multi-Agent Soft Actor-Critic`, **all** losses (`loss_actor`, `loss_qvalue`, `loss_alpha`) explode after ~1M environment steps…
-
Thank you very much for open-sourcing your code.
Recently I have wanted to apply the model used for Breakout to other games, but I find that different games have different action spaces, which leads to erro…
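One common way to handle this is to size the policy's output layer from the target game's own action space (e.g. `env.action_space.n`) instead of hard-coding Breakout's action count. A sketch in plain NumPy (`make_policy_head` is a hypothetical name, not from the repo above):

```python
import numpy as np

def make_policy_head(feature_dim, num_actions, seed=0):
    """Hypothetical sketch: build the final action layer from the game's
    own action count so the same backbone transfers across games with
    different action spaces."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.01, size=(feature_dim, num_actions))
    b = np.zeros(num_actions)

    def head(features):
        # One logit per legal action in this particular game.
        return features @ W + b

    return head
```

Usage would be along the lines of `head = make_policy_head(512, env.action_space.n)`, re-created per game while the shared feature extractor is reused.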
-
for example in finrl\agents\elegantrl\models.py:11
---> 11 from elegantrl.train.config import Arguments
ImportError: cannot import name 'Arguments' from 'elegantrl.train.config'
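This `ImportError` suggests the installed ElegantRL no longer exposes `Arguments`; as far as I can tell, newer releases renamed it to `Config`. A hedged compatibility shim (the rename is an assumption worth verifying against your installed version):

```python
import importlib

def load_arguments(config_module="elegantrl.train.config"):
    """Compatibility shim (assumption: newer ElegantRL releases renamed
    `Arguments` to `Config`). Accepts either a dotted module path or an
    already-imported module object, which also makes it easy to test."""
    if isinstance(config_module, str):
        config_module = importlib.import_module(config_module)
    # Prefer the old name; fall back to the new one.
    return getattr(config_module, "Arguments", None) or getattr(config_module, "Config")
```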
-
**Description**
Self-explanatory: you can find a disk that makes you a mag agent, similar to GO3LM / Mag Soldat.
**Reasoning**
Obvious reasoning, just for some cool gameplay! Who doesn't want a di…
-
Hello,
Thanks for the awesome product and especially the _trains-agent_.
I have a question/issue regarding the persistence of the agents:
**Background**: I have an Ubuntu machine, running s…
-
File "/home/moderngangster/Codes/APC-Flight/ElegantRL/examples/../elegantrl/agents/AgentSAC.py", line 43, in update_net
obj_critic, state = self.get_obj_critic(buffer, self.batch_size)
File …
-
I got this error while running the training part.
I find it peculiar because it was the first and only time I saw it, so it must relate to some specific configuration of the agents.
…
-
# Why
#### As a
user of `pyCMO`
#### I want
to be able to specify different reward models for my scenarios
#### So that
I can train RL agents
# Acceptance Criteria
#### Given
we currently only expo…
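One way the requested feature could be shaped is a pluggable reward interface that scenarios select from. A sketch under assumed names (`RewardModel` and `reward` are illustrations, not pyCMO's actual API):

```python
from abc import ABC, abstractmethod

class RewardModel(ABC):
    """Hypothetical interface sketch for pluggable scenario rewards;
    the class and method names are assumptions, not an existing API."""

    @abstractmethod
    def reward(self, observation) -> float:
        """Score one observation of the scenario state."""

class ConstantReward(RewardModel):
    """Trivial example model: a fixed reward per step."""

    def __init__(self, value: float = 0.0):
        self.value = value

    def reward(self, observation) -> float:
        return self.value
```

Concrete models (attrition-based, objective-based, etc.) would then subclass `RewardModel`, and a scenario would take whichever instance the user passes in.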