-
So after training my model with ML-Agents, I took the .nn file and set it in the Model section of the Learning Model.
Initially it was set to a text asset, and that's why I changed `ENABLE_TENSORFLOW` …
-
Hello, and thank you for all your hard work.
I have some existing C++ simulation software, and I'm trying to use Unity-trained agents to control things in it (I re-created portions of the C++ simul…
-
Are there any examples of using TF-Agents with Atari? Just an end-to-end "Hello World" Breakout game would be very helpful. There used to be:
```
tf_agents/agents/dqn/examples/v1/train_eval_atar…
-
Hi, in the _update_branch_ function, you update the parent node's value with the same _total_reward_ as the current node. Why? I think they should have different values, because the action from the parent node to the curren…
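For reference, here is a minimal sketch of the discounted backup the question seems to have in mind, where parent and child do receive different values (the `Node` class, its field names, and `gamma` are assumptions for illustration, not the repository's actual _update_branch_ code):
```python
# Illustrative MCTS backup only; not the repository's _update_branch_.
class Node:
    def __init__(self, parent=None, reward=0.0):
        self.parent = parent        # parent node in the search tree
        self.reward = reward        # immediate reward of the action leading into this node
        self.visit_count = 0
        self.total_value = 0.0      # sum of backed-up returns

def backup(leaf, rollout_return, gamma=0.99):
    """Propagate a simulated return from a leaf back to the root.

    Each node accumulates the return of the sub-trajectory starting at
    that node, so the parent's value folds in one more step reward and
    one more discount factor than the child's.
    """
    node, value = leaf, rollout_return
    while node is not None:
        node.visit_count += 1
        node.total_value += value
        value = node.reward + gamma * value   # re-discount when moving one level up
        node = node.parent
```
If an implementation instead adds the same _total_reward_ to every node on the path, the per-step rewards and discounting must already be folded into that value before the backup starts; whether that is the case here depends on how _total_reward_ is computed.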
-
Hi, I'm trying to run SAC Discrete and I keep getting the following error:
```
Warning: Error detected in AddmmBackward. Traceback of forward call that caused the error:
File "results/Cart_Pole.py", …
-
At this point, we have an agent that can observe its environment and take some actions, like moving to the left. But now, we want the agent to learn a circus trick in this environment.
You can see …
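To push the agent toward that trick, the usual next step is a reward signal that fires when the trick is performed. Here is a minimal gym-style sketch of that idea (purely illustrative: `TrickEnv`, its state, and its reward logic are assumptions, not this tutorial's own environment):
```python
# Toy 1-D environment: the agent moves left/right and is rewarded
# only when it reaches the far-left position (our stand-in "trick").
import gym
import numpy as np
from gym import spaces

class TrickEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Box(low=-5.0, high=5.0, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)   # 0 = move left, 1 = move right
        self.position = 0.0

    def reset(self):
        self.position = 0.0
        return np.array([self.position], dtype=np.float32)

    def step(self, action):
        self.position += -1.0 if action == 0 else 1.0
        self.position = float(np.clip(self.position, -5.0, 5.0))
        done = self._did_trick()
        reward = 1.0 if done else 0.0             # the reward signal drives the learning
        return np.array([self.position], dtype=np.float32), reward, done, {}

    def _did_trick(self):
        return self.position <= -5.0
```
With a reward like this in place, a standard RL trainer can learn the behaviour from the reward alone, without being told how to move.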
-
Hi,
Coming off reading this blog post [here](https://blogs.unity3d.com/2019/04/15/unity-ml-agents-toolkit-v0-8-faster-training-on-real-games/), I came to wonder if there is a difference, and what i…
-
Hello!
I have been working a bit with XPlane11 recently and wanted to see if JSBSim would work better, as there are features that I prefer, such as stepping the physics myself at my own speed, and direc…
-
**Describe the bug**
If I interrupt training and then attempt to resume using the `--load` parameter, there is a dip of random size in the mean reward. This dip was not there in version 0.8. It is there…
-
In GitLab by @MLNW on Oct 15, 2019, 07:44
flake8 was installed via `pip3 install -r requirements.txt`, where `requirements.txt` contains the following entries:
```
attrs==18.2.0
coverage==4.5.4
f…