-
Hi all,
I am working on a car racing game with ML-Agents 0.8.2, but the car is not learning at all.
I have created boundaries and am using RayPerception3D, but there is still no progress.
Please help …
-
Hi, I tried the SAC trainer and I get NaN rewards whenever it updates (image attached). My environment is returning valid rewards, and the issue does not occur with PPO. Any idea what could be wrong?
…
-
Hi,
I updated my environment from 0.11 to 0.12 and TF from 1.14 to 2.0.
No issues with training at all, but GPU inference is not working as it did before (same performance as CPU inference, and poor resul…
-
**Is your feature request related to a problem? Please describe.**
Executing the Barracuda models is the biggest performance cost in my project. Using 3-layer, 512-node models, here are the Barracu…
-
## Description
I run Arch Linux and my sound devices are configured, but I don't hear any sound. When I join the game, I see the following message:
```
UnityAgentsException: The Communicator was unable t…
```
-
Hi. I was trying a few things.
Let me tell you what I've been doing:
1. Build an ML-Agents executable environment (FoodCollector).
2. Train it from outside with PyTorch (see the sketch after this list).
3. Convert the PyTorch model to…
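
For step 2, here is a minimal sketch of driving a built FoodCollector executable from an external Python script. It assumes the pre-0.15 `mlagents.envs` Python API (which matches the versions discussed in these threads); the file name, step count, and random actions are placeholders for a real PyTorch training loop.

```python
# Minimal sketch, assuming the old mlagents.envs Python API (pre-0.15) and a
# locally built FoodCollector executable; the random actions stand in for a
# real PyTorch policy.
import numpy as np
from mlagents.envs import UnityEnvironment

env = UnityEnvironment(file_name="FoodCollector")  # path to the built executable
default_brain = env.brain_names[0]
brain = env.brains[default_brain]

env_info = env.reset(train_mode=True)[default_brain]
for _ in range(1000):
    # Replace this with actions produced by your PyTorch model.
    if brain.vector_action_space_type == "continuous":
        action = np.random.randn(len(env_info.agents),
                                 brain.vector_action_space_size[0])
    else:
        action = np.column_stack([
            np.random.randint(branch_size, size=len(env_info.agents))
            for branch_size in brain.vector_action_space_size
        ])
    env_info = env.step(action)[default_brain]
    # env_info.vector_observations, env_info.rewards and env_info.local_done
    # are what an external PyTorch update would consume.
env.close()
```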
-
Hi all,
I'm trying to figure out when exactly to use a visual observation instead of a vector observation.
In my case I just have a 36 x 36 grid of black-and-white pixels, with the agent moving ar…
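
The practical difference mostly shows up in the network that encodes the observation: a visual observation is fed through a convolutional encoder, while a vector observation goes through fully connected layers. A rough sketch, using PyTorch (mentioned in an earlier post) purely to illustrate the shapes; the layer sizes are arbitrary, not anything ML-Agents uses internally.

```python
# Sketch of the two encoder shapes for a 36 x 36 black-and-white grid; the
# layer sizes are illustrative only.
import torch
import torch.nn as nn

# Vector observation: the grid flattened to 36 * 36 = 1296 floats, fed to an MLP.
mlp = nn.Sequential(nn.Linear(36 * 36, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU())

# Visual observation: the grid kept as a 1-channel 36 x 36 image, fed to a small
# CNN that can exploit the spatial layout of neighbouring pixels.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.Flatten(),
)

grid = torch.rand(1, 1, 36, 36)      # one frame of the grid
print(mlp(grid.flatten(1)).shape)    # torch.Size([1, 128])
print(cnn(grid).shape)               # torch.Size([1, 2048])
```

A 36 x 36 binary grid is small enough that either route can work; the visual/CNN route preserves the neighbourhood structure of the pixels, while flattening it into a 1296-float vector leaves the network to rediscover that structure on its own.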
-
Hi all,
Would it be possible to automatically save a .nn file every time a checkpoint is made? That way I could have one PC training indefinitely and basically develop the game further using the la…
-
Hello. I find myself asking questions again and again…
There isn't much documentation available, so I apologize.
My question is as follows:
> env_info = env.reset(train_mode=train_mode,
config=sokoban_reset_parameters[game_level])[default_br…
-
**Describe the bug**
Using --load rewrites the entire CSV status summary, making it useless. When using the --load option, it should either create another .csv with the newer data (much like TensorFlow doe…