I set up ML-Agents a few months ago on an office machine with no GPU, but built this year, and ran some tests on it just fine: around 6 game instances training an AI with mild slowdown at best.

Just now I did the same with my home setup, with the newest builds and installs of all the programs. My computer is a decent gaming rig that is getting a bit dated, and I have not set up any CUDA stuff yet (if that even works for training?). It is running an i5 4690K at 3.5 GHz, 4 cores/4 threads.

For some reason, though, when I run a nearly empty scene with just 1 agent and 1 academy, and only 3 inputs and 3 continuous outputs, I get 300 ms per frame in the editor. All the time is claimed to go to the decision-making process, and my CPU is at about 70% throughout.

Did something change? Is something different from my last run, or is my CPU really that underpowered for doing ML?
That CPU should be fine for training. How big is your neural network?
After quitting your run, there should be a *.json file in your summaries directory - it should contain some timings for parts of your code - if you're still having problems, would you mind posting the output of that? Thanks!
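If the raw file is hard to read, something like this will pretty-print the tree (a minimal sketch; `summaries/timers.json` is a placeholder, since the actual filename includes your run ID):

```python
# Pretty-print the ML-Agents timer tree (path below is a placeholder).
import json

with open("summaries/timers.json") as f:
    print(json.dumps(json.load(f), indent=2))
```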
I am just going to share that. It's not the most readable to me as to which part is which. It is saying my UnityEnvironment.step takes 81 seconds? To clarify, this is a totally empty scene with 1 agent and 1 academy; it's a simple car and a plane to drive it on. Nothing intensive should be happening on my end.
What I must add is that I half set up the environment once before, so the project had to be upgraded from a version that used "brain" files to the new system. Maybe that upgrade did something wacky? (Even though I replaced all of the ML-Agents code and have not altered mine, IIRC.)
I think the neural network settings are the defaults; I did not bother adjusting them to a custom set yet. And like I said: 3 input floats, 3 output floats.
{ "name": "root", "gauges": [ { "name": "CarBrain.mean_reward", "value": -1.0, "min": -1.0, "max": -1.0, "count": 10 } ], "total": 196.0640960826856, "count": 1, "self": 27.23873248419713, "children": [ { "name": "TrainerController.advance", "total": 168.82536359848848, "count": 50001, "self": 26.59743871954896, "children": [ { "name": "env_step", "total": 138.54210463491776, "count": 50001, "self": 106.10325752416313, "children": [ { "name": "SubprocessEnvManager._take_step", "total": 31.966718216435595, "count": 50001, "self": 0.7935256180679602, "children": [ { "name": "PPOPolicy.evaluate", "total": 31.173192598367635, "count": 50001, "self": 31.173192598367635 } ] }, { "name": "workers", "total": 0.47212889431903626, "count": 50001, "self": 0.0, "children": [ { "name": "worker_root", "total": 193.0269196876839, "count": 50001, "is_parallel": true, "self": 111.64083326897189, "children": [ { "name": "UnityEnvironment.step", "total": 81.386086418712, "count": 50001, "is_parallel": true, "self": 8.594779898282425, "children": [ { "name": "UnityEnvironment._generate_step_input", "total": 2.0340816534307233, "count": 50001, "is_parallel": true, "self": 2.0340816534307233 }, { "name": "communicator.exchange", "total": 70.75722486699885, "count": 50001, "is_parallel": true, "self": 70.75722486699885 } ] } ] } ] } ] }, { "name": "update_policy", "total": 3.6858202440217553, "count": 4, "self": 2.8439364363836717, "children": [ { "name": "PPOPolicy.update", "total": 0.8418838076380837, "count": 120, "self": 0.8418838076380837 } ] } ] } ] }
Any clue what is going on? :sweat_smile:
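For reference, the totals in that profile are cumulative over all 50001 calls, so `UnityEnvironment.step` averages about 81.4 s / 50001 ≈ 1.6 ms per call rather than 81 seconds per step. A minimal sketch to print per-call averages (assuming the JSON above is saved as `timers.json`):

```python
# Walk the timer tree above and print the average per-call cost of each node.
import json

def walk(node, depth=0):
    count = node.get("count", 1)
    avg_ms = node["total"] / count * 1000.0
    print(f"{'  ' * depth}{node['name']}: {avg_ms:.2f} ms/call over {count} calls")
    for child in node.get("children", []):
        walk(child, depth + 1)

with open("timers.json") as f:
    walk(json.load(f))
```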
Hi @Smileynator, it seems like your neural network code is running quite fast (`PPOPolicy.evaluate` and `update_policy`). That means it's most likely the Unity executable itself. Can you try running one of the example environments and seeing if it's running slowly? Does it also run slowly without Python?
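One way to test the "without Python training" half of that is to step the environment with no trainer attached at all; a minimal sketch (assumes a Release-1-style `mlagents_envs` low-level API):

```python
# Timing probe: step the environment without any training, to see whether
# the slowness is on the Unity side or in the trainer.
import time
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name=None)  # file_name=None waits for the Editor's Play button
env.reset()

n_steps = 1000
start = time.time()
for _ in range(n_steps):
    env.step()  # no actions set, so ML-Agents sends default (zero) actions
elapsed = time.time() - start
print(f"{elapsed / n_steps * 1000:.2f} ms per environment step")
env.close()
```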
Ah, that actually got me somewhere. @ervteng Just to document what happened: I set this up in the editor, with the settings at 100x timescale and unlimited FPS. I think it's mainly the timescale that causes something to mess up when one of the two cannot keep up. My expected behaviour was "run as fast as you can possibly manage, capping at 100x normal game speed"; the behaviour instead was something like "chug at 0.3 FPS".
Is this something I misunderstood, or is it simply that the neural network and editor are not synced up? I would personally expect Unity to call evaluate once per frame/physics step (whatever is set up). Maybe it has something to do with the maximum physics step: these insanely high settings demand an unreachable speed, so the physics system cannot get its calculations done within the fixed timestep, etc.?
Explaining here what exactly went wrong compared to expectations might be a great insight for others.
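For reference, newer ML-Agents releases let the Python side set these engine parameters explicitly instead of relying on Editor settings; a minimal sketch (assumes an `mlagents_envs` version that ships side-channel support):

```python
# Sketch: set the time scale and frame-rate cap from Python via a side channel.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import (
    EngineConfigurationChannel,
)

channel = EngineConfigurationChannel()
env = UnityEnvironment(file_name=None, side_channels=[channel])
channel.set_configuration_parameters(
    time_scale=100.0,      # run the simulation at up to 100x real time
    target_frame_rate=-1,  # -1: do not cap the rendering frame rate
)
env.reset()
```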
PS: In the docs for the config file it says:
batch_size | The number of experiences in each iteration of gradient descent.
What is an "experience"? The number of times a reward is given? The number of times the agent's Done() is triggered? Steps? Depending on what this means, I want to set it very differently.
@Smileynator, an experience is a single action and observation.
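Concretely, an experience is one agent step (one observation plus the action taken), so `batch_size` counts steps, not episodes or rewards. The profile above even lines up with this; an illustrative sketch (the PPO defaults here are my assumption, not stated in the thread):

```python
# Illustrative arithmetic, assuming the then-default PPO settings
# (buffer_size=10240, batch_size=1024, num_epoch=3); one "experience"
# is a single agent step: one observation plus the action taken.
steps = 50001        # env_step count from the profile above
buffer_size = 10240  # experiences collected before each policy update
batch_size = 1024    # experiences per gradient-descent iteration
num_epoch = 3        # passes over the buffer per update

updates = steps // buffer_size  # -> 4, matching update_policy's count of 4
minibatches = updates * num_epoch * (buffer_size // batch_size)
print(updates, minibatches)     # -> 4 120, matching PPOPolicy.update's count of 120
```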
I think what you're seeing isn't a slowdown; it's a behavior of the rendering system at high timescales :P Hopefully what you're also noticing is that things are jumping around in the game. This means the simulation is still running fast; it's just only rendering to the screen at 0.3 FPS. Since you're not using the rendered image as observations for your agent, you're still training.
Making the game into a build (rather than running in the editor) should provide a decent speedup as well.
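For the build route, the low-level API can point at a standalone executable; a minimal sketch with a placeholder path:

```python
# Sketch: connect to a standalone build instead of the Editor
# ("Builds/CarTraining" is a placeholder path). no_graphics skips rendering,
# which usually helps when the agent's observations aren't camera-based.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="Builds/CarTraining", no_graphics=True)
```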
@ervteng As much as I wish that were true, it looked to me as if the game was making proper linear progress at a low pace, instead of resetting and teleporting agents. But I will check it later once I have the learning down, now that it runs normally from my perspective.
I am aware of the benefits of running multiple build clients, but while debugging and testing I would rather keep it in the editor until I feel I have it roughly working right. Might need some good educational books on the subject :P
Thanks for all the help, and thanks for the clarification on the term. If it's not in the docs, please put it somewhere so people know what experiences are in this context; I was out of the loop!