-
I am trying to adapt the SAC minitaur tutorial, which uses the Actor-Learner API and Reverb, to work with the PPO agent. I changed the `tf_agent` from `sac_agent.SacAgent` to the `ppo_clip_agent.PPOCli…
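For context, a minimal sketch of the agent swap might look like the following. This is only the agent construction, not the full Actor-Learner pipeline; `Pendulum-v1`, the layer sizes, and the learning rate are illustrative stand-ins, not values from the tutorial.
```python
import tensorflow as tf
from tf_agents.agents.ppo import ppo_clip_agent
from tf_agents.environments import suite_gym
from tf_agents.networks import actor_distribution_network
from tf_agents.networks import value_network
from tf_agents.train.utils import spec_utils
from tf_agents.train.utils import train_utils

# 'Pendulum-v1' stands in for the minitaur env; layer sizes and the
# learning rate are illustrative choices.
collect_env = suite_gym.load('Pendulum-v1')
observation_spec, action_spec, time_step_spec = spec_utils.get_tensor_specs(
    collect_env)

# PPO needs an actor *distribution* network plus a value network,
# instead of SAC's actor/critic pair.
actor_net = actor_distribution_network.ActorDistributionNetwork(
    observation_spec, action_spec, fc_layer_params=(256, 256))
value_net = value_network.ValueNetwork(
    observation_spec, fc_layer_params=(256, 256))

train_step = train_utils.create_train_step()
tf_agent = ppo_clip_agent.PPOClipAgent(
    time_step_spec,
    action_spec,
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    actor_net=actor_net,
    value_net=value_net,
    num_epochs=10,
    train_step_counter=train_step)
tf_agent.initialize()
```
One caveat: PPO is on-policy, so the uniform-sampling Reverb table from the SAC tutorial would also need to become a FIFO-style queue that is consumed and cleared every iteration; the PPO examples in the TF-Agents repo show one way to set that up.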
-
I've set `use_gpu = True`, but GPU usage is close to zero when running the code. When I look into TensorBoard, it shows that all operations are assigned to the CPU. I then disabled `sess_confi…
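Before digging into the training script, it may help to rule out basic visibility problems. A minimal device-placement check, independent of the question's code:
```python
import tensorflow as tf

# An empty list here means TF cannot see the GPU at all (driver/CUDA problem),
# which would explain near-zero utilization regardless of agent settings.
print(tf.config.list_physical_devices('GPU'))

# Log the device each op is assigned to, mirroring what TensorBoard shows.
tf.debugging.set_log_device_placement(True)

# If the GPU is visible, this matmul should land on it.
with tf.device('/GPU:0'):
    x = tf.random.uniform((1000, 1000))
    y = tf.matmul(x, x)
print(y.device)
```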
-
When I run the first chunk of the code titled "Creating the DQN" in Chapter 18, I got the following error:
```
---------------------------------------------------------------------------
ValueErr…
```
-
Currently I have a huge dilemma:
- backport all my code to TF 1, so that I can use Stable Baselines and my own code in one project
- or use something less mature than Stable Baselines (e.g. TF-Agents) only …
-
I tried to use `replay_buffer.as_dataset()` the same way as in the TD3 example:
https://github.com/seungjaeryanlee/agents/blob/c0ee15815d4e43596513b8038ff095bda522bcd5/tf_agents/agents/td3/example…
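For reference, the usual consumption pattern in the TF-Agents examples looks roughly like this (`replay_buffer` and `agent` are assumed to be the objects already built in the script; the batch size, `num_steps`, and prefetch depth are illustrative):
```python
# Sample transitions as a tf.data pipeline; num_steps=2 yields
# (s_t, s_{t+1}) pairs, which one-step TD agents like TD3 expect.
dataset = replay_buffer.as_dataset(
    num_parallel_calls=3,
    sample_batch_size=64,
    num_steps=2).prefetch(3)

iterator = iter(dataset)
experience, _ = next(iterator)   # the second element is sampling info
loss_info = agent.train(experience)
```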
-
Hello.
Thanks for your code, but I got some errors.
I’m using your `env.py`, so I renamed the `env` folder and changed `from env import envs` to `import env as envs`.
Is it correct?
Since I…
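For what it's worth, those two import forms bind different objects. A minimal sketch of the difference, using the names from the question (it assumes the repo's `env` package is on the path):
```python
from env import envs  # binds the name `envs` defined *inside* `env`
import env as envs    # binds the module `env` itself under the name `envs`

# After the second form, anything previously used as `envs.X` must be an
# attribute of the `env` module itself, or be reached as `envs.envs.X`.
```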
-
I'm trying to convert a custom Gym project (called BTgym) to work as a TF-Agents env.
The original observation space and the action space are both `gym.spaces.Dict`, but for the moment I have simplif…
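As a sanity check, the wrapping itself (before reintroducing the `Dict` spaces) can be reduced to a few lines; `'CartPole-v1'` stands in here for the BTgym env, which is not imported:
```python
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment

# For the custom env, gym_wrapper.GymWrapper(my_btgym_env) plays the same
# role that suite_gym.load plays for a registered Gym id
# (my_btgym_env being a hypothetical BTgym instance).
py_env = suite_gym.load('CartPole-v1')
tf_env = tf_py_environment.TFPyEnvironment(py_env)

# These specs are what the agent and networks must be built against.
print(tf_env.observation_spec())
print(tf_env.action_spec())
print(tf_env.time_step_spec())
```
Note that `GymWrapper` does convert `gym.spaces.Dict` into a dict of specs, but most of the bundled networks expect flat observation and action specs, which is usually why a preprocessing combiner (or a simplification like the one above) is needed.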
-
Type: Performance Issue
Memory consumption on an Apple Silicon Mac skyrockets to 3.5+ GB once I open a single small ARM template in VS Code.
Extension version: 0.15.11
VS Code version: …
-
I have been trying to run the example with the following command:
```
python tf_agents/agents/categorical_dqn/examples/train_eval_atari.py \
--root_dir=$HOME/atari/pong \
--alsologtostderr
```
However…
-
I have a custom RL environment. The action space is (a1, a2). I need to implement a different algorithm for each action's policy. Suppose a1 comes from policy p1 and a2 comes from policy p2.…
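One possible shape for this in TF-Agents, sketched under the assumption that `p1` and `p2` are two already-built `TFPolicy` objects; the class name `TwoHeadedPolicy` and the tuple-valued action spec are illustrative, not an existing API:
```python
from tf_agents.policies import tf_policy
from tf_agents.trajectories import policy_step

class TwoHeadedPolicy(tf_policy.TFPolicy):
    """Combines two sub-policies into a single tuple-valued action."""

    def __init__(self, time_step_spec, action_spec, p1, p2):
        # action_spec is expected to be a tuple (a1_spec, a2_spec).
        super().__init__(time_step_spec, action_spec)
        self._p1 = p1
        self._p2 = p2

    def _action(self, time_step, policy_state, seed):
        a1 = self._p1.action(time_step).action  # first component, from p1
        a2 = self._p2.action(time_step).action  # second component, from p2
        return policy_step.PolicyStep((a1, a2), policy_state)
```
Acting is the easy half; training is harder, since each algorithm expects experience in its own format, so a common approach is to train the two agents separately and only compose their policies as above for collection.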