-
Opening this issue to start a discussion about whether it would be worth investing in making it easy to run TensorFlow Agents on K8s.
For some inspiration, you can look at the [TfJob CRD](https://github.com/…
-
Dear author, I found your program while researching Inverter-based Volt-Var Control. Could you please tell me which paper this code accompanies? I look forward to your reply, and wish you good health and success in your work.
-
Hello SMART Lab!
I really enjoyed your NaviSTAR paper! I was playing around with the code and have a question I hope you can help me resolve.
When I start from train_NaviSTAR.py an…
-
In several cases the actor and critic are coupled together into one agent class. Instead, they should be separated and given their own fields in the config, as in the sketch below.
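A minimal sketch of what that could look like (the names `ActorConfig`, `CriticConfig`, `AgentConfig`, and their fields are illustrative, not taken from the codebase):

```python
from dataclasses import dataclass, field

# Hypothetical layout: the agent config holds separate actor and critic
# sub-configs instead of one coupled block.
@dataclass
class ActorConfig:
    lr: float = 3e-4
    hidden_sizes: tuple = (64, 64)

@dataclass
class CriticConfig:
    lr: float = 1e-3
    hidden_sizes: tuple = (64, 64)

@dataclass
class AgentConfig:
    actor: ActorConfig = field(default_factory=ActorConfig)
    critic: CriticConfig = field(default_factory=CriticConfig)
```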
-
## Keyword: sgd
There is no result
## Keyword: optimization
### A Model-Constrained Tangent Manifold Learning Approach for Dynamical Systems
- **Authors:** Hai Van Nguyen, Tan Bui-Thanh
-…
-
I think the newest version of the code for this work is in the "Review" repo,
but before adding the trigger into the dataset, we need to get the weak-performing agent.
I can't find the code for how to get…
-
**Error I got when running train_ppo_llama.sh**
```
set -x
read -r -d '' training_commands
-
### Describe the problem
Currently, loss values are reported inconsistently in the metrics across algorithms. We should always report (1) the total loss, (2) the policy / vf loss if present (or actor / crit…
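A hypothetical sketch of one uniform shape every algorithm could populate (the key names below are illustrative, not the library's actual metric keys):

```python
def standard_loss_metrics(total_loss, policy_loss=None, vf_loss=None):
    """Return a uniformly keyed dict of scalar loss values for logging."""
    metrics = {"total_loss": float(total_loss)}
    if policy_loss is not None:
        metrics["policy_loss"] = float(policy_loss)  # a.k.a. actor loss
    if vf_loss is not None:
        metrics["vf_loss"] = float(vf_loss)          # a.k.a. critic loss
    return metrics
```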
-
### What happened + What you expected to happen
What happened:
Got a couple of hiccups when trying out the minimal example of unity3d_env_local.py with the ML-Agents 3DBall project:
1. `env_runners` n…
-
In `train` in https://github.com/alpine-chamois/actor-critic/blob/main/src/actorcritic/actor_critic_agent.py, `predicted_next_value` looks like it uses an out-of-date `value` from before the latest a…
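For illustration, a minimal sketch of the pattern being described, assuming a PyTorch one-step actor-critic (none of the names below come from the linked repo): the bootstrap value has to be recomputed from the state observed after the environment step, rather than reusing a `value` cached before it.

```python
import torch

# Hypothetical sketch, not code from the linked repository.
def td_target(critic, reward, next_state, done, gamma=0.99):
    """One-step TD target; `done` is 1.0 if the episode ended, else 0.0."""
    with torch.no_grad():
        # Recompute the value from the post-step state here; reusing a value
        # cached before the environment step would make the target stale.
        predicted_next_value = critic(next_state)
    return reward + gamma * (1.0 - done) * predicted_next_value
```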