openai / neural-mmo

Code for the paper "Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents"
https://openai.com/blog/neural-mmo/
MIT License

GPU acceleration on training and rendering #17

Closed. shuruiz closed this issue 5 years ago

shuruiz commented 5 years ago

Based on my test run on Forge, training and rendering are slow (I ran it on a MacBook Pro). Since Neural MMO uses the PyTorch framework, would it be possible to support GPU acceleration to speed up both training and rendering?

jsuarez5341 commented 5 years ago

Yes: GPU acceleration should already be active for rendering. Yes: GPU acceleration is supported for model code. No: it probably won't help for the included architecture.

You can move the model to the GPU with .cuda(), but the model is small, you'll incur some CPU-GPU communication overhead, and you'd also have to do some annoying batching. The main advantage of the native API at small scale is that it incurs zero communication overhead apart from syncing gradients: as described in the README, it pins one environment and all of its associated rollouts to each CPU core. It's entirely possible to train on a 4-6 core CPU in a few days; that's how I trained the default sample model provided with the repo.
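For reference, a minimal sketch of the .cuda()/.to(device) route described above. The TinyPolicy class, its sizes, and the fake observation batch are placeholders for illustration, not the actual Forge model or API:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a small policy network; the real model lives
# in the Forge code, this is only to show the device move and batching.
class TinyPolicy(nn.Module):
    def __init__(self, obs_dim=32, n_actions=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = TinyPolicy().to(device)        # equivalent to .cuda() on a GPU box

# The catch: observations have to be batched and moved to the same device,
# which is where the communication overhead mentioned above comes from.
obs = torch.randn(16, 32).to(device)   # batch of 16 fake observations
logits = model(obs)
print(logits.shape)                    # torch.Size([16, 8])
```

With the native API path, by contrast, each CPU core keeps its own environment, rollouts, and copy of the small model, and only gradients are communicated between processes.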

As for why the renderer is slow: it's not super optimized yet. It runs beautifully on a good desktop, but maybe not yet on laptops. Hopefully I'll get to this in the near future -- it's on my short list of todos.