-
**edit 3**
bug is reproduced here https://github.com/FlimFlamm/ml-agents
the standard PushBlock scene is configured with a Crawler agent, and it has use_recurrent set to true
**edit 2**
I …
-
**Describe the bug**
We are trying to use visual observations for training. Our env is based on this sample https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Create-…
-
**Context**
I can successfully use the provided UnityEnvironment() to load the sample binary. However, when it comes to a game binary I built myself, it throws errors that I don't know how to procee…
-
Whether I run using a build or in the Editor using the Python API, the simulation is much slower than when using mlagents-learn. I'm using a PyTorch implementation of DDPG with CUDA 10.1. Is this …
-
Under certain conditions, the terminal will fail to properly load multiple agent behaviors in multi-agent environments. I have not determined the exact cause of this bug, but I have discovered reliabl…
-
I've trained the AI for my slime game ([here](https://www.nosuchstudio.com/slime/)) with mlagents. To use the trained model, I need to keep the academy object in my scene. The Academy object "ties" fi…
-
"INFO:mlagents.trainers: block5-0: BlockBrain: Step: 0. Time Elapsed: 99.064 s Mean Reward: -7.901. Std of Reward: 2.661. Not Training."
What does "Not Training" mean here?
Can anyone help me?
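(For what it's worth, in older ML-Agents versions "Not Training" usually meant mlagents-learn was launched without the `--train` flag, so the policy was only running inference and no learning steps were being taken. A small sketch to make the fields in that console line explicit; the regex and field names here are illustrative, not part of the ML-Agents API:)

```python
import re

# Hypothetical parser for an ml-agents console line like the one quoted above,
# just to spell out what each field means.
LOG_PATTERN = re.compile(
    r"(?P<run_id>[\w\-]+): (?P<brain>\w+): "
    r"Step: (?P<step>\d+)\. Time Elapsed: (?P<elapsed>[\d.]+) s "
    r"Mean Reward: (?P<mean>-?[\d.]+)\. Std of Reward: (?P<std>-?[\d.]+)\. "
    r"(?P<mode>Not Training|Training)\."
)

line = ("INFO:mlagents.trainers: block5-0: BlockBrain: Step: 0. "
        "Time Elapsed: 99.064 s Mean Reward: -7.901. "
        "Std of Reward: 2.661. Not Training.")

fields = LOG_PATTERN.search(line).groupdict()
print(fields["mode"])       # the trainer's mode flag ("Not Training" = inference only)
print(int(fields["step"]))  # 0: no learning steps have been taken yet
```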
-
Hi there!
My pipeline used to use version 0.12 and now I'm upgrading to the Release 1. I only use the Python UnityEnvironment class to interact with the binary file. I don't use the mlagents traine…
-
I am looking to train an agent to have both continuous and discrete actions. For example, the agent should be able to run at varying speeds in X and Z in any direction and decide whether or not to jum…
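(For context: ML-Agents behaviors were long restricted to either a continuous or a discrete action space, with hybrid support only arriving in later releases via ActionSpec. A common workaround was to use a purely continuous space and threshold one dimension to recover the discrete jump decision. A minimal sketch of that workaround, assuming a hypothetical 3-dimensional continuous output; the helper name and layout are my own, not the ML-Agents API:)

```python
import numpy as np

def split_hybrid_action(raw, jump_threshold=0.0):
    """Hypothetical workaround: treat the first two continuous outputs as
    run velocities in X/Z, and threshold the third to get a binary jump."""
    vx, vz, jump_logit = raw
    return {
        "run_x": float(np.clip(vx, -1.0, 1.0)),  # clamp to the usual [-1, 1] range
        "run_z": float(np.clip(vz, -1.0, 1.0)),
        "jump": bool(jump_logit > jump_threshold),  # discrete decision
    }

action = split_hybrid_action(np.array([0.4, -1.3, 0.7]))
# run_z gets clipped to -1.0; the 0.7 logit crosses the threshold, so jump is True
```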
-
When I import the 'ML-Agents' files into my project and migrate my project to version 0.11, I run this Python script and click the play button, but it seems like my Unity Editor cannot communicate with m…